Jan 20 01:40:11.182063 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 20 01:40:11.182086 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 19 23:25:42 -00 2026
Jan 20 01:40:11.182094 kernel: KASLR enabled
Jan 20 01:40:11.182100 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 20 01:40:11.182107 kernel: printk: bootconsole [pl11] enabled
Jan 20 01:40:11.182112 kernel: efi: EFI v2.7 by EDK II
Jan 20 01:40:11.182120 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 20 01:40:11.182126 kernel: random: crng init done
Jan 20 01:40:11.182132 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:40:11.182138 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 20 01:40:11.182144 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182150 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182158 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 20 01:40:11.182164 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182171 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182177 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182184 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182192 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182198 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182205 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 20 01:40:11.182211 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:11.182217 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 20 01:40:11.182224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 20 01:40:11.182230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 20 01:40:11.182236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 20 01:40:11.182242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 20 01:40:11.182249 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 20 01:40:11.182255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 20 01:40:11.182263 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 20 01:40:11.182269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 20 01:40:11.182276 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 20 01:40:11.182282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 20 01:40:11.182288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 20 01:40:11.182295 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 20 01:40:11.182301 kernel: NUMA: NODE_DATA [mem 0x1bf7f1800-0x1bf7f6fff]
Jan 20 01:40:11.182307 kernel: Zone ranges:
Jan 20 01:40:11.182314 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 20 01:40:11.182340 kernel: DMA32 empty
Jan 20 01:40:11.182347 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 01:40:11.182353 kernel: Movable zone start for each node
Jan 20 01:40:11.182364 kernel: Early memory node ranges
Jan 20 01:40:11.182371 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 20 01:40:11.182378 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 20 01:40:11.182385 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 20 01:40:11.182391 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 20 01:40:11.182399 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 20 01:40:11.182406 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 20 01:40:11.182413 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 01:40:11.182420 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 20 01:40:11.182427 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 20 01:40:11.182433 kernel: psci: probing for conduit method from ACPI.
Jan 20 01:40:11.182440 kernel: psci: PSCIv1.1 detected in firmware.
Jan 20 01:40:11.182447 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 20 01:40:11.182453 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 20 01:40:11.182460 kernel: psci: SMC Calling Convention v1.4
Jan 20 01:40:11.182467 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 20 01:40:11.182473 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 20 01:40:11.182481 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 20 01:40:11.182488 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 20 01:40:11.182495 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 20 01:40:11.182502 kernel: Detected PIPT I-cache on CPU0
Jan 20 01:40:11.182508 kernel: CPU features: detected: GIC system register CPU interface
Jan 20 01:40:11.182515 kernel: CPU features: detected: Hardware dirty bit management
Jan 20 01:40:11.182522 kernel: CPU features: detected: Spectre-BHB
Jan 20 01:40:11.182528 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 20 01:40:11.182535 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 20 01:40:11.182542 kernel: CPU features: detected: ARM erratum 1418040
Jan 20 01:40:11.182549 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 20 01:40:11.182557 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 20 01:40:11.182563 kernel: alternatives: applying boot alternatives
Jan 20 01:40:11.182572 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=93b7c0065a09ec71bf84c247be021b0de512ae4ddd93f3ff0c2b7b260332752d
Jan 20 01:40:11.182579 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:40:11.182586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:40:11.182592 kernel: Fallback order for Node 0: 0
Jan 20 01:40:11.182599 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 20 01:40:11.182605 kernel: Policy zone: Normal
Jan 20 01:40:11.182612 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:40:11.182619 kernel: software IO TLB: area num 2.
Jan 20 01:40:11.182626 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 20 01:40:11.182634 kernel: Memory: 3982644K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211516K reserved, 0K cma-reserved)
Jan 20 01:40:11.182641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 20 01:40:11.182648 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:40:11.182655 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:40:11.182662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 20 01:40:11.182669 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:40:11.182676 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:40:11.182683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:40:11.182689 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 20 01:40:11.182696 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 20 01:40:11.182703 kernel: GICv3: 960 SPIs implemented
Jan 20 01:40:11.182711 kernel: GICv3: 0 Extended SPIs implemented
Jan 20 01:40:11.182717 kernel: Root IRQ handler: gic_handle_irq
Jan 20 01:40:11.182724 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 20 01:40:11.182731 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 20 01:40:11.182737 kernel: ITS: No ITS available, not enabling LPIs
Jan 20 01:40:11.182744 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:40:11.182751 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 20 01:40:11.182758 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 20 01:40:11.182764 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 20 01:40:11.182771 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 20 01:40:11.182778 kernel: Console: colour dummy device 80x25
Jan 20 01:40:11.182787 kernel: printk: console [tty1] enabled
Jan 20 01:40:11.182794 kernel: ACPI: Core revision 20230628
Jan 20 01:40:11.182801 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 20 01:40:11.182808 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:40:11.182815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 20 01:40:11.182822 kernel: landlock: Up and running.
Jan 20 01:40:11.182829 kernel: SELinux: Initializing.
Jan 20 01:40:11.182836 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:40:11.182842 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:40:11.182851 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 01:40:11.182858 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 01:40:11.182865 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 20 01:40:11.182872 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 20 01:40:11.182879 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 20 01:40:11.182886 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:40:11.182893 kernel: rcu: Max phase no-delay instances is 400. Jan 20 01:40:11.182900 kernel: Remapping and enabling EFI services. Jan 20 01:40:11.182913 kernel: smp: Bringing up secondary CPUs ... Jan 20 01:40:11.182920 kernel: Detected PIPT I-cache on CPU1 Jan 20 01:40:11.182927 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 20 01:40:11.182935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 01:40:11.182944 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 20 01:40:11.182951 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 01:40:11.182959 kernel: SMP: Total of 2 processors activated. Jan 20 01:40:11.182966 kernel: CPU features: detected: 32-bit EL0 Support Jan 20 01:40:11.182973 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 20 01:40:11.182982 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 20 01:40:11.182990 kernel: CPU features: detected: CRC32 instructions Jan 20 01:40:11.182998 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 20 01:40:11.183005 kernel: CPU features: detected: LSE atomic instructions Jan 20 01:40:11.183012 kernel: CPU features: detected: Privileged Access Never Jan 20 01:40:11.183019 kernel: CPU: All CPU(s) started at EL1 Jan 20 01:40:11.183026 kernel: alternatives: applying system-wide alternatives Jan 20 01:40:11.183034 kernel: devtmpfs: initialized Jan 20 01:40:11.183041 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 01:40:11.183050 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 01:40:11.183057 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 01:40:11.183065 kernel: SMBIOS 3.1.0 present. Jan 20 01:40:11.183072 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 20 01:40:11.183079 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 01:40:11.183087 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 20 01:40:11.183094 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 20 01:40:11.183102 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 20 01:40:11.183109 kernel: audit: initializing netlink subsys (disabled) Jan 20 01:40:11.183117 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 20 01:40:11.183125 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 01:40:11.183132 kernel: cpuidle: using governor menu Jan 20 01:40:11.183140 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 20 01:40:11.183147 kernel: ASID allocator initialised with 32768 entries
Jan 20 01:40:11.183154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:40:11.183161 kernel: Serial: AMBA PL011 UART driver
Jan 20 01:40:11.183169 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 20 01:40:11.183176 kernel: Modules: 0 pages in range for non-PLT usage
Jan 20 01:40:11.183185 kernel: Modules: 509008 pages in range for PLT usage
Jan 20 01:40:11.183192 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:40:11.183199 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:40:11.183206 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 20 01:40:11.183214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 20 01:40:11.183221 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:40:11.183228 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:40:11.183236 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 20 01:40:11.183243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 20 01:40:11.183252 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:40:11.183259 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:40:11.183267 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:40:11.183274 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:40:11.183281 kernel: ACPI: Interpreter enabled
Jan 20 01:40:11.183288 kernel: ACPI: Using GIC for interrupt routing
Jan 20 01:40:11.183296 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 20 01:40:11.183303 kernel: printk: console [ttyAMA0] enabled
Jan 20 01:40:11.183311 kernel: printk: bootconsole [pl11] disabled
Jan 20 01:40:11.183324 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 20 01:40:11.183332 kernel: iommu: Default domain type: Translated
Jan 20 01:40:11.183339 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 20 01:40:11.183346 kernel: efivars: Registered efivars operations
Jan 20 01:40:11.183354 kernel: vgaarb: loaded
Jan 20 01:40:11.183361 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 20 01:40:11.183368 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:40:11.183375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:40:11.183383 kernel: pnp: PnP ACPI init
Jan 20 01:40:11.183392 kernel: pnp: PnP ACPI: found 0 devices
Jan 20 01:40:11.183399 kernel: NET: Registered PF_INET protocol family
Jan 20 01:40:11.183406 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:40:11.183414 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:40:11.183421 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:40:11.183429 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:40:11.183436 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:40:11.183443 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:40:11.183451 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:40:11.183459 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:40:11.183467 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:40:11.183474 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:40:11.183481 kernel: kvm [1]: HYP mode not available
Jan 20 01:40:11.183488 kernel: Initialise system trusted keyrings
Jan 20 01:40:11.183495 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:40:11.183503 kernel: Key type asymmetric registered
Jan 20 01:40:11.183510 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:40:11.183517 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 01:40:11.183525 kernel: io scheduler mq-deadline registered
Jan 20 01:40:11.183533 kernel: io scheduler kyber registered
Jan 20 01:40:11.183540 kernel: io scheduler bfq registered
Jan 20 01:40:11.183547 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:40:11.183554 kernel: thunder_xcv, ver 1.0
Jan 20 01:40:11.183562 kernel: thunder_bgx, ver 1.0
Jan 20 01:40:11.183569 kernel: nicpf, ver 1.0
Jan 20 01:40:11.183576 kernel: nicvf, ver 1.0
Jan 20 01:40:11.183703 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 20 01:40:11.183774 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T01:40:10 UTC (1768873210)
Jan 20 01:40:11.183785 kernel: efifb: probing for efifb
Jan 20 01:40:11.183792 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 20 01:40:11.183799 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 20 01:40:11.183807 kernel: efifb: scrolling: redraw
Jan 20 01:40:11.183814 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 01:40:11.183821 kernel: Console: switching to colour frame buffer device 128x48
Jan 20 01:40:11.183829 kernel: fb0: EFI VGA frame buffer device
Jan 20 01:40:11.183837 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 20 01:40:11.183845 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 01:40:11.183852 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 20 01:40:11.183859 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 20 01:40:11.183867 kernel: watchdog: Hard watchdog permanently disabled
Jan 20 01:40:11.183874 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:40:11.183881 kernel: Segment Routing with IPv6
Jan 20 01:40:11.183888 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:40:11.183896 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:40:11.183904 kernel: Key type dns_resolver registered
Jan 20 01:40:11.183912 kernel: registered taskstats version 1
Jan 20 01:40:11.183919 kernel: Loading compiled-in X.509 certificates
Jan 20 01:40:11.183926 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 78d001f5b2e422df1e406698b80c7183ecdd19cf'
Jan 20 01:40:11.183933 kernel: Key type .fscrypt registered
Jan 20 01:40:11.183940 kernel: Key type fscrypt-provisioning registered
Jan 20 01:40:11.183947 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:40:11.183955 kernel: ima: Allocated hash algorithm: sha1 Jan 20 01:40:11.183962 kernel: ima: No architecture policies found Jan 20 01:40:11.183971 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 20 01:40:11.183978 kernel: clk: Disabling unused clocks Jan 20 01:40:11.183985 kernel: Freeing unused kernel memory: 39424K Jan 20 01:40:11.183992 kernel: Run /init as init process Jan 20 01:40:11.183999 kernel: with arguments: Jan 20 01:40:11.184006 kernel: /init Jan 20 01:40:11.184013 kernel: with environment: Jan 20 01:40:11.184021 kernel: HOME=/ Jan 20 01:40:11.184028 kernel: TERM=linux Jan 20 01:40:11.184037 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 01:40:11.184048 systemd[1]: Detected virtualization microsoft. Jan 20 01:40:11.184056 systemd[1]: Detected architecture arm64. Jan 20 01:40:11.184063 systemd[1]: Running in initrd. Jan 20 01:40:11.184071 systemd[1]: No hostname configured, using default hostname. Jan 20 01:40:11.184078 systemd[1]: Hostname set to . Jan 20 01:40:11.184087 systemd[1]: Initializing machine ID from random generator. Jan 20 01:40:11.184096 systemd[1]: Queued start job for default target initrd.target. Jan 20 01:40:11.184104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:40:11.184112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:40:11.184120 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 01:40:11.184128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:40:11.184136 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 01:40:11.184144 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 01:40:11.184153 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 01:40:11.184163 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 01:40:11.184171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:40:11.184179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:40:11.184186 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:40:11.184194 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:40:11.184202 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:40:11.184210 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:40:11.184218 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:40:11.184227 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:40:11.184235 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 01:40:11.184243 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 20 01:40:11.184250 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 20 01:40:11.184258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:40:11.184266 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:40:11.184274 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:40:11.184282 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:40:11.184291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:40:11.184299 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:40:11.184307 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:40:11.184314 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:40:11.184328 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:40:11.184350 systemd-journald[217]: Collecting audit messages is disabled. Jan 20 01:40:11.184371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:11.184379 systemd-journald[217]: Journal started Jan 20 01:40:11.184398 systemd-journald[217]: Runtime Journal (/run/log/journal/a39a58b1e84f42aa82db316d7115f6c6) is 8.0M, max 78.5M, 70.5M free. Jan 20 01:40:11.185424 systemd-modules-load[218]: Inserted module 'overlay' Jan 20 01:40:11.200860 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:40:11.209333 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:40:11.211349 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:40:11.222876 kernel: Bridge firewalling registered Jan 20 01:40:11.218059 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 20 01:40:11.218869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:40:11.228750 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 01:40:11.236029 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:40:11.246244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:11.269634 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:40:11.281469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:40:11.294335 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:40:11.310406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:40:11.322422 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:11.328377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:40:11.339634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:40:11.349742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:11.372544 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 01:40:11.383759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:40:11.392481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 20 01:40:11.410346 dracut-cmdline[252]: dracut-dracut-053
Jan 20 01:40:11.410346 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=93b7c0065a09ec71bf84c247be021b0de512ae4ddd93f3ff0c2b7b260332752d
Jan 20 01:40:11.445663 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:40:11.472277 systemd-resolved[258]: Positive Trust Anchors:
Jan 20 01:40:11.472296 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:40:11.472647 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:40:11.474816 systemd-resolved[258]: Defaulting to hostname 'linux'.
Jan 20 01:40:11.475735 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:40:11.482578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:40:11.542335 kernel: SCSI subsystem initialized
Jan 20 01:40:11.550333 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 01:40:11.562335 kernel: iscsi: registered transport (tcp)
Jan 20 01:40:11.576851 kernel: iscsi: registered transport (qla4xxx)
Jan 20 01:40:11.576898 kernel: QLogic iSCSI HBA Driver
Jan 20 01:40:11.615027 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:40:11.626732 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 01:40:11.654746 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 01:40:11.654774 kernel: device-mapper: uevent: version 1.0.3
Jan 20 01:40:11.659639 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 01:40:11.722332 kernel: raid6: neonx8 gen() 15804 MB/s
Jan 20 01:40:11.725333 kernel: raid6: neonx4 gen() 15692 MB/s
Jan 20 01:40:11.744328 kernel: raid6: neonx2 gen() 13277 MB/s
Jan 20 01:40:11.764326 kernel: raid6: neonx1 gen() 10475 MB/s
Jan 20 01:40:11.783329 kernel: raid6: int64x8 gen() 6981 MB/s
Jan 20 01:40:11.802328 kernel: raid6: int64x4 gen() 7362 MB/s
Jan 20 01:40:11.822326 kernel: raid6: int64x2 gen() 6146 MB/s
Jan 20 01:40:11.844153 kernel: raid6: int64x1 gen() 5072 MB/s
Jan 20 01:40:11.844163 kernel: raid6: using algorithm neonx8 gen() 15804 MB/s
Jan 20 01:40:11.867142 kernel: raid6: .... xor() 12050 MB/s, rmw enabled
Jan 20 01:40:11.867161 kernel: raid6: using neon recovery algorithm
Jan 20 01:40:11.875327 kernel: xor: measuring software checksum speed
Jan 20 01:40:11.880297 kernel: 8regs : 19173 MB/sec
Jan 20 01:40:11.880311 kernel: 32regs : 19674 MB/sec
Jan 20 01:40:11.882902 kernel: arm64_neon : 27087 MB/sec
Jan 20 01:40:11.885964 kernel: xor: using function: arm64_neon (27087 MB/sec)
Jan 20 01:40:11.935588 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 01:40:11.944978 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:40:11.957506 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:40:11.977744 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Jan 20 01:40:11.980780 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:40:12.010509 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 01:40:12.022390 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Jan 20 01:40:12.049280 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:40:12.065440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:40:12.103979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:40:12.122957 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 01:40:12.139673 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:40:12.148270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:40:12.162641 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:40:12.175737 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 01:40:12.188555 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 01:40:12.201775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:40:12.228346 kernel: hv_vmbus: Vmbus version:5.3
Jan 20 01:40:12.231538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 01:40:12.235894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:40:12.261620 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 20 01:40:12.261640 kernel: hv_vmbus: registering driver hv_storvsc
Jan 20 01:40:12.261651 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 20 01:40:12.261661 kernel: scsi host1: storvsc_host_t
Jan 20 01:40:12.261810 kernel: scsi host0: storvsc_host_t
Jan 20 01:40:12.264110 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:40:12.279136 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 20 01:40:12.279175 kernel: hv_vmbus: registering driver hid_hyperv
Jan 20 01:40:12.279186 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 20 01:40:12.289113 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 20 01:40:12.302645 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 20 01:40:12.302699 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 20 01:40:12.308625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:40:12.308878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:40:12.324042 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:40:12.341166 kernel: hv_vmbus: registering driver hv_netvsc
Jan 20 01:40:12.341203 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 20 01:40:12.343743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:40:12.358413 kernel: PTP clock support registered
Jan 20 01:40:12.368055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:40:12.396028 kernel: hv_utils: Registering HyperV Utility Driver
Jan 20 01:40:12.396051 kernel: hv_vmbus: registering driver hv_utils
Jan 20 01:40:12.396069 kernel: hv_utils: Heartbeat IC version 3.0
Jan 20 01:40:12.396078 kernel: hv_utils: Shutdown IC version 3.2
Jan 20 01:40:12.396088 kernel: hv_utils: TimeSync IC version 4.0
Jan 20 01:40:12.660317 systemd-resolved[258]: Clock change detected. Flushing caches.
Jan 20 01:40:12.671436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:40:12.691091 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 20 01:40:12.692892 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 01:40:12.697257 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 20 01:40:12.697427 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 20 01:40:12.701315 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 20 01:40:12.705367 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 20 01:40:12.705794 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 20 01:40:12.705921 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 20 01:40:12.706854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:40:12.734213 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:12.734240 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 20 01:40:12.734377 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: VF slot 1 added Jan 20 01:40:12.752713 kernel: hv_vmbus: registering driver hv_pci Jan 20 01:40:12.752766 kernel: hv_pci d2e77d55-403f-4737-8e88-2d3fbc45696c: PCI VMBus probing: Using version 0x10004 Jan 20 01:40:12.763166 kernel: hv_pci d2e77d55-403f-4737-8e88-2d3fbc45696c: PCI host bridge to bus 403f:00 Jan 20 01:40:12.763349 kernel: pci_bus 403f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 20 01:40:12.763456 kernel: pci_bus 403f:00: No busn resource found for root bus, will use [bus 00-ff] Jan 20 01:40:12.768413 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:40:12.780008 kernel: pci 403f:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 20 01:40:12.786896 kernel: pci 403f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 01:40:12.790817 kernel: pci 403f:00:02.0: enabling Extended Tags Jan 20 01:40:12.811607 kernel: pci 403f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 403f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 20 01:40:12.811694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:40:12.830566 kernel: pci_bus 403f:00: busn_res: [bus 00-ff] end is updated to 00 Jan 20 01:40:12.830757 kernel: pci 403f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 01:40:12.871077 kernel: mlx5_core 403f:00:02.0: enabling device (0000 -> 0002) Jan 20 01:40:12.876797 kernel: mlx5_core 403f:00:02.0: firmware version: 16.30.5026 Jan 20 01:40:13.073613 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: VF registering: eth1 Jan 20 01:40:13.073939 kernel: mlx5_core 403f:00:02.0 eth1: joined to eth0 Jan 20 01:40:13.079278 kernel: mlx5_core 403f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 20 01:40:13.089802 kernel: mlx5_core 403f:00:02.0 enP16447s1: renamed from eth1 Jan 20 01:40:13.320349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 20 01:40:13.341057 kernel: BTRFS: device fsid ea3e8495-ec03-40ca-9b09-0f7e2a4e9620 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (495) Jan 20 01:40:13.355232 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 20 01:40:13.373886 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (504) Jan 20 01:40:13.361443 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 20 01:40:13.392947 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:40:13.403717 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 20 01:40:13.418984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 01:40:13.436586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:14.447806 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:14.448017 disk-uuid[606]: The operation has completed successfully. Jan 20 01:40:14.525808 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 20 01:40:14.527805 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:40:14.557008 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 01:40:14.567163 sh[719]: Success Jan 20 01:40:14.597155 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 20 01:40:14.866779 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:40:14.875862 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 01:40:14.884894 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 01:40:14.913677 kernel: BTRFS info (device dm-0): first mount of filesystem ea3e8495-ec03-40ca-9b09-0f7e2a4e9620 Jan 20 01:40:14.913724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:14.919817 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 01:40:14.923870 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:40:14.927523 kernel: BTRFS info (device dm-0): using free space tree Jan 20 01:40:15.250167 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 01:40:15.254218 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:40:15.273033 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:40:15.283256 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:40:15.314446 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:15.314502 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:15.318381 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:15.354815 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:15.363334 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 01:40:15.374803 kernel: BTRFS info (device sda6): last unmount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:15.379377 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:40:15.401030 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:40:15.406262 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:40:15.428434 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 01:40:15.433900 systemd-networkd[900]: lo: Link UP Jan 20 01:40:15.433903 systemd-networkd[900]: lo: Gained carrier Jan 20 01:40:15.435434 systemd-networkd[900]: Enumeration completed Jan 20 01:40:15.435712 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:40:15.443256 systemd[1]: Reached target network.target - Network. Jan 20 01:40:15.445714 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:15.445717 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 01:40:15.522859 kernel: mlx5_core 403f:00:02.0 enP16447s1: Link up Jan 20 01:40:15.561793 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: Data path switched to VF: enP16447s1 Jan 20 01:40:15.562142 systemd-networkd[900]: enP16447s1: Link UP Jan 20 01:40:15.562226 systemd-networkd[900]: eth0: Link UP Jan 20 01:40:15.562345 systemd-networkd[900]: eth0: Gained carrier Jan 20 01:40:15.562354 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:15.571988 systemd-networkd[900]: enP16447s1: Gained carrier Jan 20 01:40:15.588834 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:40:16.156642 ignition[903]: Ignition 2.19.0 Jan 20 01:40:16.156653 ignition[903]: Stage: fetch-offline Jan 20 01:40:16.162809 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:40:16.156687 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:16.156695 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:16.156779 ignition[903]: parsed url from cmdline: "" Jan 20 01:40:16.156812 ignition[903]: no config URL provided Jan 20 01:40:16.156817 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:40:16.183027 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 01:40:16.156824 ignition[903]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:40:16.156829 ignition[903]: failed to fetch config: resource requires networking Jan 20 01:40:16.157280 ignition[903]: Ignition finished successfully Jan 20 01:40:16.206479 ignition[912]: Ignition 2.19.0 Jan 20 01:40:16.206486 ignition[912]: Stage: fetch Jan 20 01:40:16.206671 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:16.206681 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:16.206772 ignition[912]: parsed url from cmdline: "" Jan 20 01:40:16.206775 ignition[912]: no config URL provided Jan 20 01:40:16.206793 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:40:16.206803 ignition[912]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:40:16.206826 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 20 01:40:16.312606 ignition[912]: GET result: OK Jan 20 01:40:16.312679 ignition[912]: config has been read from IMDS userdata Jan 20 01:40:16.312722 ignition[912]: parsing config with SHA512: 4ad4fbca560d13d1ca933f68c41fd41715187ed50c4160625b02440f501d6828f6fad4d2c2bd7e348146f3f79727d7758f16f7989e1ecd06948bdaff58ca00f5 Jan 20 01:40:16.316305 unknown[912]: fetched base config from "system" Jan 20 01:40:16.316674 ignition[912]: fetch: fetch complete Jan 20 01:40:16.316312 unknown[912]: fetched base config from "system" Jan 20 01:40:16.316678 ignition[912]: fetch: fetch passed Jan 20 01:40:16.316318 unknown[912]: fetched user config from "azure" Jan 20 01:40:16.316716 ignition[912]: Ignition finished successfully Jan 20 01:40:16.321063 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 01:40:16.339035 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:40:16.360288 ignition[918]: Ignition 2.19.0 Jan 20 01:40:16.364156 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 20 01:40:16.360295 ignition[918]: Stage: kargs Jan 20 01:40:16.360488 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:16.360496 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:16.361830 ignition[918]: kargs: kargs passed Jan 20 01:40:16.383033 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:40:16.361886 ignition[918]: Ignition finished successfully Jan 20 01:40:16.409428 ignition[924]: Ignition 2.19.0 Jan 20 01:40:16.409443 ignition[924]: Stage: disks Jan 20 01:40:16.409652 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:16.415793 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:40:16.409661 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:16.420863 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:40:16.413142 ignition[924]: disks: disks passed Jan 20 01:40:16.429618 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:40:16.413196 ignition[924]: Ignition finished successfully Jan 20 01:40:16.440763 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:40:16.450485 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:40:16.460906 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:40:16.487114 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:40:16.659776 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 20 01:40:16.668156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:40:16.681950 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:40:16.735819 kernel: EXT4-fs (sda9): mounted filesystem c6ba54f7-cbb1-463d-980b-a8c197f00e73 r/w with ordered data mode. Quota mode: none. Jan 20 01:40:16.735562 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:40:16.739480 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:40:16.780879 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:40:16.798792 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Jan 20 01:40:16.806911 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:40:16.825673 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:16.825704 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:16.825714 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:16.825724 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:16.832906 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 01:40:16.846212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:40:16.846243 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:40:16.854383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:40:16.869708 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:40:16.889330 systemd-networkd[900]: eth0: Gained IPv6LL Jan 20 01:40:16.890044 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 01:40:17.494685 coreos-metadata[960]: Jan 20 01:40:17.494 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 01:40:17.501004 coreos-metadata[960]: Jan 20 01:40:17.500 INFO Fetch successful Jan 20 01:40:17.501004 coreos-metadata[960]: Jan 20 01:40:17.500 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 20 01:40:17.515823 coreos-metadata[960]: Jan 20 01:40:17.515 INFO Fetch successful Jan 20 01:40:17.532016 coreos-metadata[960]: Jan 20 01:40:17.531 INFO wrote hostname ci-4081.3.6-n-e5d82fe73a to /sysroot/etc/hostname Jan 20 01:40:17.539883 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 01:40:17.623619 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:40:17.773138 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:40:17.795273 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:40:17.803424 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:40:18.517487 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:40:18.529970 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:40:18.535909 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:40:18.556959 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:40:18.560795 kernel: BTRFS info (device sda6): last unmount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:18.583585 ignition[1060]: INFO : Ignition 2.19.0 Jan 20 01:40:18.587955 ignition[1060]: INFO : Stage: mount Jan 20 01:40:18.587955 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:18.587955 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:18.587955 ignition[1060]: INFO : mount: mount passed Jan 20 01:40:18.587955 ignition[1060]: INFO : Ignition finished successfully Jan 20 01:40:18.592265 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:40:18.601080 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:40:18.621032 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:40:18.635982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:40:18.656806 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1073) Jan 20 01:40:18.668478 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:18.668515 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:18.672556 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:18.678793 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:18.680710 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 01:40:18.705939 ignition[1091]: INFO : Ignition 2.19.0
Jan 20 01:40:18.709769 ignition[1091]: INFO : Stage: files
Jan 20 01:40:18.709769 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:40:18.709769 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:40:18.709769 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 01:40:18.726578 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 01:40:18.726578 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 01:40:18.803323 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 01:40:18.809407 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 01:40:18.809407 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 01:40:18.803697 unknown[1091]: wrote ssh authorized keys file for user: core
Jan 20 01:40:18.825022 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 20 01:40:18.825022 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 20 01:40:18.860582 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 01:40:18.961131 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 20 01:40:19.545495 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 01:40:19.904593 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 20 01:40:19.904593 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 01:40:19.933172 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 01:40:19.943258 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 01:40:19.943258 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 01:40:19.943258 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 01:40:19.943258 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 01:40:19.943258 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:40:19.943258 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:40:19.943258 ignition[1091]: INFO : files: files passed
Jan 20 01:40:19.943258 ignition[1091]: INFO : Ignition finished successfully
Jan 20 01:40:19.944212 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 01:40:19.970308 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 01:40:19.981963 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 01:40:20.029496 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:40:20.029496 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:40:20.000343 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 01:40:20.056545 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:40:20.000449 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 01:40:20.027809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:40:20.035830 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 01:40:20.072898 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 01:40:20.099171 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 01:40:20.099294 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 01:40:20.109456 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:40:20.118359 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:40:20.126624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:40:20.138221 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:40:20.157259 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:40:20.172059 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:40:20.186812 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:40:20.191943 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:40:20.201535 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:40:20.210953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:40:20.211080 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:40:20.224258 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:40:20.233225 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:40:20.241625 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:40:20.250694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:40:20.261158 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:40:20.270828 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:40:20.280055 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:40:20.289293 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:40:20.298713 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:40:20.307196 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:40:20.314354 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:40:20.314528 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:40:20.326366 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:40:20.335071 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:40:20.344323 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:40:20.344426 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:40:20.354764 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:40:20.354938 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:40:20.368449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:40:20.368614 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:40:20.377192 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:40:20.377342 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:40:20.385580 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 20 01:40:20.385724 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 01:40:20.410879 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 20 01:40:20.433325 ignition[1142]: INFO : Ignition 2.19.0 Jan 20 01:40:20.433325 ignition[1142]: INFO : Stage: umount Jan 20 01:40:20.433325 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:20.433325 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:20.433325 ignition[1142]: INFO : umount: umount passed Jan 20 01:40:20.433325 ignition[1142]: INFO : Ignition finished successfully Jan 20 01:40:20.428869 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:40:20.436739 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:40:20.436923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:40:20.445588 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:40:20.445691 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:40:20.463960 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:40:20.464670 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:40:20.464773 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:40:20.479701 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:40:20.479769 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:40:20.485557 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:40:20.485606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:40:20.493868 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 01:40:20.493905 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 01:40:20.502327 systemd[1]: Stopped target network.target - Network. Jan 20 01:40:20.510976 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:40:20.511036 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:40:20.520725 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:40:20.527582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:40:20.535803 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:40:20.543911 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:40:20.551476 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:40:20.564214 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:40:20.564266 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:40:20.572628 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:40:20.572665 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:40:20.580883 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:40:20.580930 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:40:20.588304 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:40:20.588336 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:40:20.596657 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:40:20.605172 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:40:20.612528 systemd-networkd[900]: eth0: DHCPv6 lease lost Jan 20 01:40:20.613981 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 20 01:40:20.614059 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:40:20.632832 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:40:20.632957 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:40:20.641897 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:40:20.642021 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:40:20.649669 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:40:20.649718 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:40:20.669988 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:40:20.677692 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:40:20.801890 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: Data path switched from VF: enP16447s1 Jan 20 01:40:20.677768 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:40:20.686694 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:40:20.686735 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:40:20.695183 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:40:20.695220 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:40:20.703770 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:40:20.703832 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:20.714219 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:40:20.743126 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:40:20.743284 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:40:20.751687 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:40:20.751771 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:40:20.761090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:40:20.761126 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:40:20.770068 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:40:20.770114 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:40:20.783386 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:40:20.783437 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:40:20.801949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:40:20.802009 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:20.823032 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:40:20.831059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:40:20.831203 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:40:20.843136 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 01:40:20.843187 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:40:20.852436 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 20 01:40:20.852511 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:40:20.862526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:20.862564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:20.872932 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:40:20.875094 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:40:20.891667 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:40:20.891802 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:40:20.969508 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:40:20.969628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:40:20.976293 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:40:20.984268 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:40:20.984384 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:40:21.011038 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:40:21.120872 systemd[1]: Switching root. Jan 20 01:40:21.218722 systemd-journald[217]: Journal stopped Jan 20 01:40:11.182063 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 20 01:40:11.182086 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 19 23:25:42 -00 2026 Jan 20 01:40:11.182094 kernel: KASLR enabled Jan 20 01:40:11.182100 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 20 01:40:11.182107 kernel: printk: bootconsole [pl11] enabled Jan 20 01:40:11.182112 kernel: efi: EFI v2.7 by EDK II Jan 20 01:40:11.182120 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 20 01:40:11.182126 kernel: random: crng init done Jan 20 01:40:11.182132 kernel: ACPI: Early table checksum verification disabled Jan 20 01:40:11.182138 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 20 01:40:11.182144 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182150 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182158 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 20 01:40:11.182164 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182171 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182177 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182184 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182192 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182198 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182205 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 20 01:40:11.182211 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 01:40:11.182217 kernel: ACPI: SPCR: 
console: pl011,mmio32,0xeffec000,115200 Jan 20 01:40:11.182224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 20 01:40:11.182230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 20 01:40:11.182236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 20 01:40:11.182242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 20 01:40:11.182249 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 20 01:40:11.182255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 20 01:40:11.182263 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 20 01:40:11.182269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 20 01:40:11.182276 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 20 01:40:11.182282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 20 01:40:11.182288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 20 01:40:11.182295 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 20 01:40:11.182301 kernel: NUMA: NODE_DATA [mem 0x1bf7f1800-0x1bf7f6fff] Jan 20 01:40:11.182307 kernel: Zone ranges: Jan 20 01:40:11.182314 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 20 01:40:11.182340 kernel: DMA32 empty Jan 20 01:40:11.182347 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 20 01:40:11.182353 kernel: Movable zone start for each node Jan 20 01:40:11.182364 kernel: Early memory node ranges Jan 20 01:40:11.182371 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 20 01:40:11.182378 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 20 01:40:11.182385 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 20 01:40:11.182391 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 20 01:40:11.182399 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 20 01:40:11.182406 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 20 01:40:11.182413 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 20 01:40:11.182420 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 20 01:40:11.182427 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 20 01:40:11.182433 kernel: psci: probing for conduit method from ACPI. Jan 20 01:40:11.182440 kernel: psci: PSCIv1.1 detected in firmware. Jan 20 01:40:11.182447 kernel: psci: Using standard PSCI v0.2 function IDs Jan 20 01:40:11.182453 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 20 01:40:11.182460 kernel: psci: SMC Calling Convention v1.4 Jan 20 01:40:11.182467 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 20 01:40:11.182473 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 20 01:40:11.182481 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 20 01:40:11.182488 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 20 01:40:11.182495 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 20 01:40:11.182502 kernel: Detected PIPT I-cache on CPU0 Jan 20 01:40:11.182508 kernel: CPU features: detected: GIC system register CPU interface Jan 20 01:40:11.182515 kernel: CPU features: detected: Hardware dirty bit management Jan 20 01:40:11.182522 kernel: CPU features: detected: Spectre-BHB Jan 20 01:40:11.182528 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 20 01:40:11.182535 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 20 01:40:11.182542 kernel: CPU features: detected: ARM erratum 1418040 Jan 20 01:40:11.182549 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 20 01:40:11.182557 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 20 01:40:11.182563 kernel: alternatives: applying boot alternatives Jan 20 01:40:11.182572 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=93b7c0065a09ec71bf84c247be021b0de512ae4ddd93f3ff0c2b7b260332752d Jan 20 01:40:11.182579 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 01:40:11.182586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 01:40:11.182592 kernel: Fallback order for Node 0: 0 Jan 20 01:40:11.182599 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 20 01:40:11.182605 kernel: Policy zone: Normal Jan 20 01:40:11.182612 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 01:40:11.182619 kernel: software IO TLB: area num 2. Jan 20 01:40:11.182626 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 20 01:40:11.182634 kernel: Memory: 3982644K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211516K reserved, 0K cma-reserved) Jan 20 01:40:11.182641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 01:40:11.182648 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 01:40:11.182655 kernel: rcu: RCU event tracing is enabled. Jan 20 01:40:11.182662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 01:40:11.182669 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 01:40:11.182676 kernel: Tracing variant of Tasks RCU enabled. Jan 20 01:40:11.182683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 20 01:40:11.182689 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 01:40:11.182696 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 20 01:40:11.182703 kernel: GICv3: 960 SPIs implemented Jan 20 01:40:11.182711 kernel: GICv3: 0 Extended SPIs implemented Jan 20 01:40:11.182717 kernel: Root IRQ handler: gic_handle_irq Jan 20 01:40:11.182724 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 20 01:40:11.182731 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 20 01:40:11.182737 kernel: ITS: No ITS available, not enabling LPIs Jan 20 01:40:11.182744 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 01:40:11.182751 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 01:40:11.182758 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 20 01:40:11.182764 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 20 01:40:11.182771 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 20 01:40:11.182778 kernel: Console: colour dummy device 80x25 Jan 20 01:40:11.182787 kernel: printk: console [tty1] enabled Jan 20 01:40:11.182794 kernel: ACPI: Core revision 20230628 Jan 20 01:40:11.182801 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 20 01:40:11.182808 kernel: pid_max: default: 32768 minimum: 301 Jan 20 01:40:11.182815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 01:40:11.182822 kernel: landlock: Up and running. Jan 20 01:40:11.182829 kernel: SELinux: Initializing. Jan 20 01:40:11.182836 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 01:40:11.182842 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 01:40:11.182851 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 01:40:11.182858 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 01:40:11.182865 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 20 01:40:11.182872 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 20 01:40:11.182879 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 20 01:40:11.182886 kernel: rcu: Hierarchical SRCU implementation. Jan 20 01:40:11.182893 kernel: rcu: Max phase no-delay instances is 400. Jan 20 01:40:11.182900 kernel: Remapping and enabling EFI services. Jan 20 01:40:11.182913 kernel: smp: Bringing up secondary CPUs ... Jan 20 01:40:11.182920 kernel: Detected PIPT I-cache on CPU1 Jan 20 01:40:11.182927 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 20 01:40:11.182935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 01:40:11.182944 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 20 01:40:11.182951 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 01:40:11.182959 kernel: SMP: Total of 2 processors activated. 
Jan 20 01:40:11.182966 kernel: CPU features: detected: 32-bit EL0 Support Jan 20 01:40:11.182973 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 20 01:40:11.182982 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 20 01:40:11.182990 kernel: CPU features: detected: CRC32 instructions Jan 20 01:40:11.182998 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 20 01:40:11.183005 kernel: CPU features: detected: LSE atomic instructions Jan 20 01:40:11.183012 kernel: CPU features: detected: Privileged Access Never Jan 20 01:40:11.183019 kernel: CPU: All CPU(s) started at EL1 Jan 20 01:40:11.183026 kernel: alternatives: applying system-wide alternatives Jan 20 01:40:11.183034 kernel: devtmpfs: initialized Jan 20 01:40:11.183041 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 01:40:11.183050 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 01:40:11.183057 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 01:40:11.183065 kernel: SMBIOS 3.1.0 present. Jan 20 01:40:11.183072 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 20 01:40:11.183079 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 01:40:11.183087 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 20 01:40:11.183094 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 20 01:40:11.183102 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 20 01:40:11.183109 kernel: audit: initializing netlink subsys (disabled) Jan 20 01:40:11.183117 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 20 01:40:11.183125 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 01:40:11.183132 kernel: cpuidle: using governor menu Jan 20 01:40:11.183140 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 20 01:40:11.183147 kernel: ASID allocator initialised with 32768 entries Jan 20 01:40:11.183154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 01:40:11.183161 kernel: Serial: AMBA PL011 UART driver Jan 20 01:40:11.183169 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 20 01:40:11.183176 kernel: Modules: 0 pages in range for non-PLT usage Jan 20 01:40:11.183185 kernel: Modules: 509008 pages in range for PLT usage Jan 20 01:40:11.183192 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 01:40:11.183199 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 01:40:11.183206 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 20 01:40:11.183214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 20 01:40:11.183221 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 01:40:11.183228 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 01:40:11.183236 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 20 01:40:11.183243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 20 01:40:11.183252 kernel: ACPI: Added _OSI(Module Device) Jan 20 01:40:11.183259 kernel: ACPI: Added _OSI(Processor Device) Jan 20 01:40:11.183267 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 01:40:11.183274 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 01:40:11.183281 kernel: ACPI: Interpreter enabled Jan 20 01:40:11.183288 kernel: ACPI: Using GIC for interrupt routing Jan 20 01:40:11.183296 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 20 01:40:11.183303 kernel: printk: console [ttyAMA0] enabled Jan 20 01:40:11.183311 kernel: printk: bootconsole [pl11] disabled Jan 20 01:40:11.183324 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 20 01:40:11.183332 kernel: iommu: Default domain type: Translated Jan 20 01:40:11.183339 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 20 01:40:11.183346 kernel: efivars: Registered efivars operations Jan 20 01:40:11.183354 kernel: vgaarb: loaded Jan 20 01:40:11.183361 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 20 01:40:11.183368 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 01:40:11.183375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 01:40:11.183383 kernel: pnp: PnP ACPI init Jan 20 01:40:11.183392 kernel: pnp: PnP ACPI: found 0 devices Jan 20 01:40:11.183399 kernel: NET: Registered PF_INET protocol family Jan 20 01:40:11.183406 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 01:40:11.183414 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 01:40:11.183421 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 01:40:11.183429 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 01:40:11.183436 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 01:40:11.183443 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 01:40:11.183451 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 01:40:11.183459 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 01:40:11.183467 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 
01:40:11.183474 kernel: PCI: CLS 0 bytes, default 64 Jan 20 01:40:11.183481 kernel: kvm [1]: HYP mode not available Jan 20 01:40:11.183488 kernel: Initialise system trusted keyrings Jan 20 01:40:11.183495 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 01:40:11.183503 kernel: Key type asymmetric registered Jan 20 01:40:11.183510 kernel: Asymmetric key parser 'x509' registered Jan 20 01:40:11.183517 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 01:40:11.183525 kernel: io scheduler mq-deadline registered Jan 20 01:40:11.183533 kernel: io scheduler kyber registered Jan 20 01:40:11.183540 kernel: io scheduler bfq registered Jan 20 01:40:11.183547 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 01:40:11.183554 kernel: thunder_xcv, ver 1.0 Jan 20 01:40:11.183562 kernel: thunder_bgx, ver 1.0 Jan 20 01:40:11.183569 kernel: nicpf, ver 1.0 Jan 20 01:40:11.183576 kernel: nicvf, ver 1.0 Jan 20 01:40:11.183703 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 20 01:40:11.183774 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T01:40:10 UTC (1768873210) Jan 20 01:40:11.183785 kernel: efifb: probing for efifb Jan 20 01:40:11.183792 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 20 01:40:11.183799 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 20 01:40:11.183807 kernel: efifb: scrolling: redraw Jan 20 01:40:11.183814 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 20 01:40:11.183821 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 01:40:11.183829 kernel: fb0: EFI VGA frame buffer device Jan 20 01:40:11.183837 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 20 01:40:11.183845 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 01:40:11.183852 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 20 01:40:11.183859 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 20 01:40:11.183867 kernel: watchdog: Hard watchdog permanently disabled Jan 20 01:40:11.183874 kernel: NET: Registered PF_INET6 protocol family Jan 20 01:40:11.183881 kernel: Segment Routing with IPv6 Jan 20 01:40:11.183888 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 01:40:11.183896 kernel: NET: Registered PF_PACKET protocol family Jan 20 01:40:11.183904 kernel: Key type dns_resolver registered Jan 20 01:40:11.183912 kernel: registered taskstats version 1 Jan 20 01:40:11.183919 kernel: Loading compiled-in X.509 certificates Jan 20 01:40:11.183926 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 78d001f5b2e422df1e406698b80c7183ecdd19cf' Jan 20 01:40:11.183933 kernel: Key type .fscrypt registered Jan 20 01:40:11.183940 kernel: Key type fscrypt-provisioning registered Jan 20 01:40:11.183947 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 01:40:11.183955 kernel: ima: Allocated hash algorithm: sha1 Jan 20 01:40:11.183962 kernel: ima: No architecture policies found Jan 20 01:40:11.183971 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 20 01:40:11.183978 kernel: clk: Disabling unused clocks Jan 20 01:40:11.183985 kernel: Freeing unused kernel memory: 39424K Jan 20 01:40:11.183992 kernel: Run /init as init process Jan 20 01:40:11.183999 kernel: with arguments: Jan 20 01:40:11.184006 kernel: /init Jan 20 01:40:11.184013 kernel: with environment: Jan 20 01:40:11.184021 kernel: HOME=/ Jan 20 01:40:11.184028 kernel: TERM=linux Jan 20 01:40:11.184037 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 01:40:11.184048 systemd[1]: Detected virtualization microsoft. Jan 20 01:40:11.184056 systemd[1]: Detected architecture arm64. Jan 20 01:40:11.184063 systemd[1]: Running in initrd. Jan 20 01:40:11.184071 systemd[1]: No hostname configured, using default hostname. Jan 20 01:40:11.184078 systemd[1]: Hostname set to . Jan 20 01:40:11.184087 systemd[1]: Initializing machine ID from random generator. Jan 20 01:40:11.184096 systemd[1]: Queued start job for default target initrd.target. Jan 20 01:40:11.184104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:40:11.184112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:40:11.184120 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 01:40:11.184128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:40:11.184136 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 01:40:11.184144 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 01:40:11.184153 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 01:40:11.184163 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 01:40:11.184171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:40:11.184179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:40:11.184186 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:40:11.184194 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:40:11.184202 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:40:11.184210 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:40:11.184218 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:40:11.184227 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:40:11.184235 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 01:40:11.184243 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 20 01:40:11.184250 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 20 01:40:11.184258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:40:11.184266 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:40:11.184274 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:40:11.184282 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:40:11.184291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:40:11.184299 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:40:11.184307 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:40:11.184314 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:40:11.184328 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:40:11.184350 systemd-journald[217]: Collecting audit messages is disabled. Jan 20 01:40:11.184371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:11.184379 systemd-journald[217]: Journal started Jan 20 01:40:11.184398 systemd-journald[217]: Runtime Journal (/run/log/journal/a39a58b1e84f42aa82db316d7115f6c6) is 8.0M, max 78.5M, 70.5M free. Jan 20 01:40:11.185424 systemd-modules-load[218]: Inserted module 'overlay' Jan 20 01:40:11.200860 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:40:11.209333 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:40:11.211349 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:40:11.222876 kernel: Bridge firewalling registered Jan 20 01:40:11.218059 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 20 01:40:11.218869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:40:11.228750 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 01:40:11.236029 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:40:11.246244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:11.269634 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:40:11.281469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:40:11.294335 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:40:11.310406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:40:11.322422 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:11.328377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:40:11.339634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:40:11.349742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:11.372544 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 01:40:11.383759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:40:11.392481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 20 01:40:11.410346 dracut-cmdline[252]: dracut-dracut-053 Jan 20 01:40:11.410346 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=93b7c0065a09ec71bf84c247be021b0de512ae4ddd93f3ff0c2b7b260332752d Jan 20 01:40:11.445663 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:40:11.472277 systemd-resolved[258]: Positive Trust Anchors: Jan 20 01:40:11.472296 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:40:11.472647 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:40:11.474816 systemd-resolved[258]: Defaulting to hostname 'linux'. Jan 20 01:40:11.475735 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:40:11.482578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:40:11.542335 kernel: SCSI subsystem initialized Jan 20 01:40:11.550333 kernel: Loading iSCSI transport class v2.0-870. Jan 20 01:40:11.562335 kernel: iscsi: registered transport (tcp) Jan 20 01:40:11.576851 kernel: iscsi: registered transport (qla4xxx) Jan 20 01:40:11.576898 kernel: QLogic iSCSI HBA Driver Jan 20 01:40:11.615027 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 01:40:11.626732 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 01:40:11.654746 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 01:40:11.654774 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:40:11.659639 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 01:40:11.722332 kernel: raid6: neonx8 gen() 15804 MB/s Jan 20 01:40:11.725333 kernel: raid6: neonx4 gen() 15692 MB/s Jan 20 01:40:11.744328 kernel: raid6: neonx2 gen() 13277 MB/s Jan 20 01:40:11.764326 kernel: raid6: neonx1 gen() 10475 MB/s Jan 20 01:40:11.783329 kernel: raid6: int64x8 gen() 6981 MB/s Jan 20 01:40:11.802328 kernel: raid6: int64x4 gen() 7362 MB/s Jan 20 01:40:11.822326 kernel: raid6: int64x2 gen() 6146 MB/s Jan 20 01:40:11.844153 kernel: raid6: int64x1 gen() 5072 MB/s Jan 20 01:40:11.844163 kernel: raid6: using algorithm neonx8 gen() 15804 MB/s Jan 20 01:40:11.867142 kernel: raid6: .... 
xor() 12050 MB/s, rmw enabled Jan 20 01:40:11.867161 kernel: raid6: using neon recovery algorithm Jan 20 01:40:11.875327 kernel: xor: measuring software checksum speed Jan 20 01:40:11.880297 kernel: 8regs : 19173 MB/sec Jan 20 01:40:11.880311 kernel: 32regs : 19674 MB/sec Jan 20 01:40:11.882902 kernel: arm64_neon : 27087 MB/sec Jan 20 01:40:11.885964 kernel: xor: using function: arm64_neon (27087 MB/sec) Jan 20 01:40:11.935588 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 01:40:11.944978 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:40:11.957506 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:40:11.977744 systemd-udevd[439]: Using default interface naming scheme 'v255'. Jan 20 01:40:11.980780 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:40:12.010509 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 01:40:12.022390 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Jan 20 01:40:12.049280 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:40:12.065440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:40:12.103979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:40:12.122957 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 01:40:12.139673 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 01:40:12.148270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:40:12.162641 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:40:12.175737 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:40:12.188555 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 01:40:12.201775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:40:12.228346 kernel: hv_vmbus: Vmbus version:5.3 Jan 20 01:40:12.231538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:40:12.235894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:12.261620 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 20 01:40:12.261640 kernel: hv_vmbus: registering driver hv_storvsc Jan 20 01:40:12.261651 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 20 01:40:12.261661 kernel: scsi host1: storvsc_host_t Jan 20 01:40:12.261810 kernel: scsi host0: storvsc_host_t Jan 20 01:40:12.264110 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:40:12.279136 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 20 01:40:12.279175 kernel: hv_vmbus: registering driver hid_hyperv Jan 20 01:40:12.279186 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 20 01:40:12.289113 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 20 01:40:12.302645 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 20 01:40:12.302699 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 20 01:40:12.308625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:12.308878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:12.324042 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:12.341166 kernel: hv_vmbus: registering driver hv_netvsc Jan 20 01:40:12.341203 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 20 01:40:12.343743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:12.358413 kernel: PTP clock support registered Jan 20 01:40:12.368055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:12.396028 kernel: hv_utils: Registering HyperV Utility Driver Jan 20 01:40:12.396051 kernel: hv_vmbus: registering driver hv_utils Jan 20 01:40:12.396069 kernel: hv_utils: Heartbeat IC version 3.0 Jan 20 01:40:12.396078 kernel: hv_utils: Shutdown IC version 3.2 Jan 20 01:40:12.396088 kernel: hv_utils: TimeSync IC version 4.0 Jan 20 01:40:12.660317 systemd-resolved[258]: Clock change detected. Flushing caches. Jan 20 01:40:12.671436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:40:12.691091 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 20 01:40:12.692892 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 01:40:12.697257 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 20 01:40:12.697427 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 20 01:40:12.701315 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 20 01:40:12.705367 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 20 01:40:12.705794 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 20 01:40:12.705921 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 20 01:40:12.706854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 20 01:40:12.734213 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:12.734240 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 20 01:40:12.734377 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: VF slot 1 added Jan 20 01:40:12.752713 kernel: hv_vmbus: registering driver hv_pci Jan 20 01:40:12.752766 kernel: hv_pci d2e77d55-403f-4737-8e88-2d3fbc45696c: PCI VMBus probing: Using version 0x10004 Jan 20 01:40:12.763166 kernel: hv_pci d2e77d55-403f-4737-8e88-2d3fbc45696c: PCI host bridge to bus 403f:00 Jan 20 01:40:12.763349 kernel: pci_bus 403f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 20 01:40:12.763456 kernel: pci_bus 403f:00: No busn resource found for root bus, will use [bus 00-ff] Jan 20 01:40:12.768413 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:40:12.780008 kernel: pci 403f:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 20 01:40:12.786896 kernel: pci 403f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 01:40:12.790817 kernel: pci 403f:00:02.0: enabling Extended Tags Jan 20 01:40:12.811607 kernel: pci 403f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 403f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 20 01:40:12.811694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:40:12.830566 kernel: pci_bus 403f:00: busn_res: [bus 00-ff] end is updated to 00 Jan 20 01:40:12.830757 kernel: pci 403f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 01:40:12.871077 kernel: mlx5_core 403f:00:02.0: enabling device (0000 -> 0002) Jan 20 01:40:12.876797 kernel: mlx5_core 403f:00:02.0: firmware version: 16.30.5026 Jan 20 01:40:13.073613 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: VF registering: eth1 Jan 20 01:40:13.073939 kernel: mlx5_core 403f:00:02.0 eth1: joined to eth0 Jan 20 01:40:13.079278 kernel: mlx5_core 403f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 20 01:40:13.089802 kernel: mlx5_core 403f:00:02.0 enP16447s1: renamed from eth1 Jan 20 01:40:13.320349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 20 01:40:13.341057 kernel: BTRFS: device fsid ea3e8495-ec03-40ca-9b09-0f7e2a4e9620 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (495) Jan 20 01:40:13.355232 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 20 01:40:13.373886 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (504) Jan 20 01:40:13.361443 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 20 01:40:13.392947 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:40:13.403717 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 20 01:40:13.418984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 01:40:13.436586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:14.447806 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:14.448017 disk-uuid[606]: The operation has completed successfully. Jan 20 01:40:14.525808 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 20 01:40:14.527805 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:40:14.557008 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 01:40:14.567163 sh[719]: Success Jan 20 01:40:14.597155 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 20 01:40:14.866779 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:40:14.875862 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 01:40:14.884894 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 01:40:14.913677 kernel: BTRFS info (device dm-0): first mount of filesystem ea3e8495-ec03-40ca-9b09-0f7e2a4e9620 Jan 20 01:40:14.913724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:14.919817 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 01:40:14.923870 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:40:14.927523 kernel: BTRFS info (device dm-0): using free space tree Jan 20 01:40:15.250167 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 01:40:15.254218 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:40:15.273033 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:40:15.283256 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:40:15.314446 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:15.314502 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:15.318381 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:15.354815 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:15.363334 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 01:40:15.374803 kernel: BTRFS info (device sda6): last unmount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:15.379377 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:40:15.401030 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:40:15.406262 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:40:15.428434 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 01:40:15.433900 systemd-networkd[900]: lo: Link UP Jan 20 01:40:15.433903 systemd-networkd[900]: lo: Gained carrier Jan 20 01:40:15.435434 systemd-networkd[900]: Enumeration completed Jan 20 01:40:15.435712 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:40:15.443256 systemd[1]: Reached target network.target - Network. Jan 20 01:40:15.445714 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:15.445717 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 01:40:15.522859 kernel: mlx5_core 403f:00:02.0 enP16447s1: Link up Jan 20 01:40:15.561793 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: Data path switched to VF: enP16447s1 Jan 20 01:40:15.562142 systemd-networkd[900]: enP16447s1: Link UP Jan 20 01:40:15.562226 systemd-networkd[900]: eth0: Link UP Jan 20 01:40:15.562345 systemd-networkd[900]: eth0: Gained carrier Jan 20 01:40:15.562354 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:15.571988 systemd-networkd[900]: enP16447s1: Gained carrier Jan 20 01:40:15.588834 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:40:16.156642 ignition[903]: Ignition 2.19.0 Jan 20 01:40:16.156653 ignition[903]: Stage: fetch-offline Jan 20 01:40:16.162809 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:40:16.156687 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:16.156695 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:16.156779 ignition[903]: parsed url from cmdline: "" Jan 20 01:40:16.156812 ignition[903]: no config URL provided Jan 20 01:40:16.156817 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:40:16.183027 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 01:40:16.156824 ignition[903]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:40:16.156829 ignition[903]: failed to fetch config: resource requires networking Jan 20 01:40:16.157280 ignition[903]: Ignition finished successfully Jan 20 01:40:16.206479 ignition[912]: Ignition 2.19.0 Jan 20 01:40:16.206486 ignition[912]: Stage: fetch Jan 20 01:40:16.206671 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:16.206681 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:16.206772 ignition[912]: parsed url from cmdline: "" Jan 20 01:40:16.206775 ignition[912]: no config URL provided Jan 20 01:40:16.206793 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:40:16.206803 ignition[912]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:40:16.206826 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 20 01:40:16.312606 ignition[912]: GET result: OK Jan 20 01:40:16.312679 ignition[912]: config has been read from IMDS userdata Jan 20 01:40:16.312722 ignition[912]: parsing config with SHA512: 4ad4fbca560d13d1ca933f68c41fd41715187ed50c4160625b02440f501d6828f6fad4d2c2bd7e348146f3f79727d7758f16f7989e1ecd06948bdaff58ca00f5 Jan 20 01:40:16.316305 unknown[912]: fetched base config from "system" Jan 20 01:40:16.316674 ignition[912]: fetch: fetch complete Jan 20 01:40:16.316312 unknown[912]: fetched base config from "system" Jan 20 01:40:16.316678 ignition[912]: fetch: fetch passed Jan 20 01:40:16.316318 unknown[912]: fetched user config from "azure" Jan 20 01:40:16.316716 ignition[912]: Ignition finished successfully Jan 20 01:40:16.321063 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 01:40:16.339035 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:40:16.360288 ignition[918]: Ignition 2.19.0 Jan 20 01:40:16.364156 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 20 01:40:16.360295 ignition[918]: Stage: kargs Jan 20 01:40:16.360488 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:16.360496 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:16.361830 ignition[918]: kargs: kargs passed Jan 20 01:40:16.383033 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:40:16.361886 ignition[918]: Ignition finished successfully Jan 20 01:40:16.409428 ignition[924]: Ignition 2.19.0 Jan 20 01:40:16.409443 ignition[924]: Stage: disks Jan 20 01:40:16.409652 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:16.415793 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:40:16.409661 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:16.420863 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:40:16.413142 ignition[924]: disks: disks passed Jan 20 01:40:16.429618 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:40:16.413196 ignition[924]: Ignition finished successfully Jan 20 01:40:16.440763 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:40:16.450485 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:40:16.460906 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:40:16.487114 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:40:16.659776 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 20 01:40:16.668156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:40:16.681950 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:40:16.735819 kernel: EXT4-fs (sda9): mounted filesystem c6ba54f7-cbb1-463d-980b-a8c197f00e73 r/w with ordered data mode. Quota mode: none. Jan 20 01:40:16.735562 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:40:16.739480 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:40:16.780879 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:40:16.798792 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Jan 20 01:40:16.806911 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:40:16.825673 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:16.825704 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:16.825714 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:16.825724 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:16.832906 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 01:40:16.846212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:40:16.846243 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:40:16.854383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:40:16.869708 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:40:16.889330 systemd-networkd[900]: eth0: Gained IPv6LL Jan 20 01:40:16.890044 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 01:40:17.494685 coreos-metadata[960]: Jan 20 01:40:17.494 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 01:40:17.501004 coreos-metadata[960]: Jan 20 01:40:17.500 INFO Fetch successful Jan 20 01:40:17.501004 coreos-metadata[960]: Jan 20 01:40:17.500 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 20 01:40:17.515823 coreos-metadata[960]: Jan 20 01:40:17.515 INFO Fetch successful Jan 20 01:40:17.532016 coreos-metadata[960]: Jan 20 01:40:17.531 INFO wrote hostname ci-4081.3.6-n-e5d82fe73a to /sysroot/etc/hostname Jan 20 01:40:17.539883 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 01:40:17.623619 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:40:17.773138 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:40:17.795273 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:40:17.803424 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:40:18.517487 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:40:18.529970 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:40:18.535909 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:40:18.556959 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:40:18.560795 kernel: BTRFS info (device sda6): last unmount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:18.583585 ignition[1060]: INFO : Ignition 2.19.0 Jan 20 01:40:18.587955 ignition[1060]: INFO : Stage: mount Jan 20 01:40:18.587955 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:18.587955 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:18.587955 ignition[1060]: INFO : mount: mount passed Jan 20 01:40:18.587955 ignition[1060]: INFO : Ignition finished successfully Jan 20 01:40:18.592265 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:40:18.601080 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:40:18.621032 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:40:18.635982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:40:18.656806 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1073) Jan 20 01:40:18.668478 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:18.668515 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:18.672556 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:18.678793 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:18.680710 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
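Earlier in this span flatcar-metadata-hostname.service asks IMDS for the compute name and writes it to /sysroot/etc/hostname. A short sketch of the same flow, again assuming the Metadata: true header; the target path is parameterised because only the initrd writes under /sysroot:

    import urllib.request

    IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")

    def fetch_instance_name() -> str:
        req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()

    def write_hostname(name: str, path: str = "/etc/hostname") -> None:
        # The initrd writes /sysroot/etc/hostname; a running system would use /etc/hostname.
        with open(path, "w") as fh:
            fh.write(name + "\n")

    if __name__ == "__main__":
        print(fetch_instance_name())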
Jan 20 01:40:18.705939 ignition[1091]: INFO : Ignition 2.19.0 Jan 20 01:40:18.709769 ignition[1091]: INFO : Stage: files Jan 20 01:40:18.709769 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:18.709769 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:18.709769 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:40:18.726578 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:40:18.726578 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:40:18.803323 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:40:18.809407 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:40:18.809407 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:40:18.803697 unknown[1091]: wrote ssh authorized keys file for user: core Jan 20 01:40:18.825022 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 01:40:18.825022 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 20 01:40:18.860582 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:40:18.961131 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 01:40:18.970072 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 20 01:40:19.545495 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:40:19.904593 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 01:40:19.904593 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:40:19.933172 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:40:19.943258 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:40:19.943258 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:40:19.943258 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:40:19.943258 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:40:19.943258 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:40:19.943258 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:40:19.943258 ignition[1091]: INFO : files: files passed Jan 20 01:40:19.943258 ignition[1091]: INFO : Ignition finished successfully Jan 20 01:40:19.944212 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:40:19.970308 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:40:19.981963 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:40:20.029496 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:40:20.029496 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:40:20.000343 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:40:20.056545 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:40:20.000449 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:40:20.027809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:40:20.035830 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:40:20.072898 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:40:20.099171 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:40:20.099294 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
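The files stage in the two spans above writes the helm tarball, the YAML manifests, update.conf, the kubernetes sysext link and image, and installs and enables prepare-helm.service. The Ignition config that requested all of this arrived via userData and is not reproduced in the log; the sketch below builds an illustrative config in the Ignition 3.x style that would ask for a similar set of operations. Paths and download URLs are copied from the log, while the spec version and the unit body (beyond its "Unpack helm to /opt/bin" description, which appears later in the log) are assumptions.

    import json

    # Illustrative reconstruction only -- the real config delivered via userData is not in the log.
    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"},
                },
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
                    "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw"},
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
                },
            ],
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    # Unit body is hypothetical; the log only shows the unit being written and enabled.
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                                "[Service]\nType=oneshot\n"
                                "ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.3-linux-arm64.tar.gz\n"
                                "[Install]\nWantedBy=multi-user.target\n",
                },
            ],
        },
    }

    if __name__ == "__main__":
        print(json.dumps(config, indent=2))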
Jan 20 01:40:20.109456 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:40:20.118359 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:40:20.126624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:40:20.138221 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:40:20.157259 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:40:20.172059 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:40:20.186812 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:40:20.191943 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:40:20.201535 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:40:20.210953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:40:20.211080 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:40:20.224258 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:40:20.233225 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:40:20.241625 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:40:20.250694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:40:20.261158 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:40:20.270828 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:40:20.280055 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:40:20.289293 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:40:20.298713 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:40:20.307196 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:40:20.314354 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:40:20.314528 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:40:20.326366 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:40:20.335071 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:40:20.344323 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:40:20.344426 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:40:20.354764 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:40:20.354938 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:40:20.368449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:40:20.368614 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:40:20.377192 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:40:20.377342 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:40:20.385580 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 20 01:40:20.385724 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 01:40:20.410879 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 20 01:40:20.433325 ignition[1142]: INFO : Ignition 2.19.0 Jan 20 01:40:20.433325 ignition[1142]: INFO : Stage: umount Jan 20 01:40:20.433325 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:20.433325 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:20.433325 ignition[1142]: INFO : umount: umount passed Jan 20 01:40:20.433325 ignition[1142]: INFO : Ignition finished successfully Jan 20 01:40:20.428869 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:40:20.436739 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:40:20.436923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:40:20.445588 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:40:20.445691 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:40:20.463960 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:40:20.464670 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:40:20.464773 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:40:20.479701 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:40:20.479769 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:40:20.485557 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:40:20.485606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:40:20.493868 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 01:40:20.493905 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 01:40:20.502327 systemd[1]: Stopped target network.target - Network. Jan 20 01:40:20.510976 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:40:20.511036 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:40:20.520725 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:40:20.527582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:40:20.535803 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:40:20.543911 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:40:20.551476 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:40:20.564214 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:40:20.564266 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:40:20.572628 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:40:20.572665 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:40:20.580883 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:40:20.580930 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:40:20.588304 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:40:20.588336 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:40:20.596657 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:40:20.605172 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:40:20.612528 systemd-networkd[900]: eth0: DHCPv6 lease lost Jan 20 01:40:20.613981 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 20 01:40:20.614059 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:40:20.632832 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:40:20.632957 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:40:20.641897 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:40:20.642021 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:40:20.649669 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:40:20.649718 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:40:20.669988 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:40:20.677692 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:40:20.801890 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: Data path switched from VF: enP16447s1 Jan 20 01:40:20.677768 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:40:20.686694 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:40:20.686735 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:40:20.695183 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:40:20.695220 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:40:20.703770 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:40:20.703832 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:20.714219 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:40:20.743126 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:40:20.743284 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:40:20.751687 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:40:20.751771 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:40:20.761090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:40:20.761126 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:40:20.770068 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:40:20.770114 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:40:20.783386 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:40:20.783437 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:40:20.801949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:40:20.802009 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:20.823032 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:40:20.831059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:40:20.831203 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:40:20.843136 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 01:40:20.843187 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:40:20.852436 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 20 01:40:20.852511 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:40:20.862526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:20.862564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:20.872932 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:40:20.875094 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:40:20.891667 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:40:20.891802 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:40:20.969508 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:40:20.969628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:40:20.976293 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:40:20.984268 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:40:20.984384 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:40:21.011038 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:40:21.120872 systemd[1]: Switching root. Jan 20 01:40:21.218722 systemd-journald[217]: Journal stopped Jan 20 01:40:25.578991 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 20 01:40:25.579016 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:40:25.579027 kernel: SELinux: policy capability open_perms=1 Jan 20 01:40:25.579038 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:40:25.579046 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:40:25.579053 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:40:25.579062 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:40:25.579071 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:40:25.579079 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:40:25.579087 kernel: audit: type=1403 audit(1768873222.303:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:40:25.579097 systemd[1]: Successfully loaded SELinux policy in 176.484ms. Jan 20 01:40:25.579107 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.966ms. Jan 20 01:40:25.579117 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 01:40:25.579126 systemd[1]: Detected virtualization microsoft. Jan 20 01:40:25.579136 systemd[1]: Detected architecture arm64. Jan 20 01:40:25.579146 systemd[1]: Detected first boot. Jan 20 01:40:25.579155 systemd[1]: Hostname set to . Jan 20 01:40:25.579164 systemd[1]: Initializing machine ID from random generator. Jan 20 01:40:25.579173 zram_generator::config[1184]: No configuration found. Jan 20 01:40:25.579184 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:40:25.579196 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:40:25.579206 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:40:25.579216 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
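This span covers the hand-off from the initrd to the real root: "Switching root" at 01:40:21.120872, the initrd journald stopping, and the new systemd instance finishing its SELinux policy load a few seconds later. Because every line carries a microsecond timestamp, such durations can be read straight off the log; a small sketch of that arithmetic using two timestamps quoted from this section:

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"

    def delta_seconds(start: str, end: str) -> float:
        """Difference between two journal-style timestamps, in seconds."""
        return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

    if __name__ == "__main__":
        # From "Switching root" to "Successfully loaded SELinux policy" after the switch.
        print(delta_seconds("Jan 20 01:40:21.120872", "Jan 20 01:40:25.579097"))  # ~4.46 s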
Jan 20 01:40:25.579225 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:40:25.579234 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:40:25.579244 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:40:25.579253 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:40:25.579262 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:40:25.579273 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:40:25.579283 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:40:25.579306 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 01:40:25.579315 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:40:25.579325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:40:25.579334 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:40:25.579344 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:40:25.579353 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:40:25.579363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:40:25.579373 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 20 01:40:25.579383 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:40:25.579392 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:40:25.579404 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:40:25.579414 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:40:25.579424 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:40:25.579433 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:40:25.579444 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:40:25.579454 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:40:25.579463 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:40:25.579473 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:40:25.579482 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:40:25.579492 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:40:25.579501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:40:25.579512 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:40:25.579522 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:40:25.579531 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:40:25.579541 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:40:25.579551 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:40:25.579560 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 20 01:40:25.579571 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:40:25.579581 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:40:25.579592 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:40:25.579601 systemd[1]: Reached target machines.target - Containers. Jan 20 01:40:25.579611 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:40:25.579621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:25.579631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:40:25.579641 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:40:25.579651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:25.579661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:40:25.579671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:25.579680 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:40:25.579690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:40:25.579700 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:40:25.579710 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 01:40:25.579719 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:40:25.579729 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:40:25.579740 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:40:25.579750 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:40:25.579759 kernel: fuse: init (API version 7.39) Jan 20 01:40:25.579767 kernel: ACPI: bus type drm_connector registered Jan 20 01:40:25.579776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:40:25.579792 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:40:25.579802 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:40:25.579811 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:40:25.579840 systemd-journald[1287]: Collecting audit messages is disabled. Jan 20 01:40:25.579863 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 01:40:25.579873 systemd-journald[1287]: Journal started Jan 20 01:40:25.579895 systemd-journald[1287]: Runtime Journal (/run/log/journal/be55941464374137848477dcf9df2634) is 8.0M, max 78.5M, 70.5M free. Jan 20 01:40:24.613030 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:40:24.860465 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 20 01:40:24.860824 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:40:24.861119 systemd[1]: systemd-journald.service: Consumed 2.480s CPU time. Jan 20 01:40:25.586543 kernel: loop: module loaded Jan 20 01:40:25.586604 systemd[1]: Stopped verity-setup.service. 
Jan 20 01:40:25.603895 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:40:25.605087 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:40:25.610448 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:40:25.619019 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:40:25.623170 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:40:25.628015 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:40:25.632939 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:40:25.638808 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:40:25.644281 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:40:25.649844 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:40:25.650051 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:40:25.655485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:25.655683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:25.661340 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:40:25.661469 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:40:25.666201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:25.666319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:25.671487 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:40:25.671615 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:40:25.676472 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:40:25.676596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:25.681281 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:40:25.686339 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:40:25.692003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:40:25.697374 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:40:25.710601 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:40:25.722877 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:40:25.728493 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:40:25.733137 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:40:25.733169 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:40:25.738542 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 01:40:25.745388 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:40:25.751736 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:40:25.756896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:25.830965 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 20 01:40:25.836532 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:40:25.841558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:40:25.842528 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 01:40:25.847197 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:40:25.848144 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:40:25.856387 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 01:40:25.864917 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:40:25.877746 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 01:40:25.888269 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:40:25.893903 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:40:25.899057 systemd-journald[1287]: Time spent on flushing to /var/log/journal/be55941464374137848477dcf9df2634 is 13.695ms for 892 entries. Jan 20 01:40:25.899057 systemd-journald[1287]: System Journal (/var/log/journal/be55941464374137848477dcf9df2634) is 8.0M, max 2.6G, 2.6G free. Jan 20 01:40:25.932581 systemd-journald[1287]: Received client request to flush runtime journal. Jan 20 01:40:25.932630 kernel: loop0: detected capacity change from 0 to 31320 Jan 20 01:40:25.904889 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:40:25.913990 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:40:25.923671 udevadm[1321]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 20 01:40:25.924571 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:40:25.936089 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 01:40:25.941736 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:40:25.965156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:40:25.977479 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 01:40:25.978814 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 01:40:26.027002 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Jan 20 01:40:26.027018 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Jan 20 01:40:26.032041 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:40:26.045930 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:40:26.133353 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:40:26.142946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:40:26.156117 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jan 20 01:40:26.156436 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. 
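The systemd-journald line above reports 13.695 ms spent flushing 892 runtime-journal entries to the persistent journal. The per-entry cost implied by those two numbers:

    entries = 892
    flush_ms = 13.695                              # figures from the systemd-journald line above
    print(flush_ms / entries * 1000, "us/entry")   # ~15.4 microseconds per entry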
Jan 20 01:40:26.161821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:40:26.271807 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:40:26.307803 kernel: loop1: detected capacity change from 0 to 114328 Jan 20 01:40:26.637866 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:40:26.652016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:40:26.670343 systemd-udevd[1344]: Using default interface naming scheme 'v255'. Jan 20 01:40:26.729812 kernel: loop2: detected capacity change from 0 to 114424 Jan 20 01:40:26.800293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:40:26.814405 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:40:26.850977 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 20 01:40:26.859959 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 01:40:26.943673 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:40:26.964210 kernel: hv_vmbus: registering driver hv_balloon Jan 20 01:40:26.964295 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 20 01:40:26.969677 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 20 01:40:26.969769 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:40:26.983797 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:40:26.998150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:27.017705 kernel: hv_vmbus: registering driver hyperv_fb Jan 20 01:40:27.017835 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 20 01:40:27.018862 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:27.019068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:27.028583 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 20 01:40:27.028664 kernel: Console: switching to colour dummy device 80x25 Jan 20 01:40:27.034858 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 01:40:27.046971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:27.055931 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:27.056884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:27.074526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:27.084431 systemd-networkd[1356]: lo: Link UP Jan 20 01:40:27.084442 systemd-networkd[1356]: lo: Gained carrier Jan 20 01:40:27.088369 systemd-networkd[1356]: Enumeration completed Jan 20 01:40:27.088535 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:40:27.090916 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:27.090919 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:40:27.095639 kernel: loop3: detected capacity change from 0 to 211168 Jan 20 01:40:27.099045 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
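The hyperv_fb lines above report a 1024x768 console at 32-bit colour depth inside an 8,388,608-byte frame buffer. A one-line check that the mode fits the buffer, using only numbers from the log:

    width, height, bytes_per_pixel = 1024, 768, 4   # 1024x768 at 32-bit colour
    fb_size = 8_388_608                             # frame buffer size reported by hyperv_fb
    needed = width * height * bytes_per_pixel       # 3,145,728 bytes (3 MiB)
    print(needed, needed <= fb_size)                # 3145728 True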
Jan 20 01:40:27.137190 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1354) Jan 20 01:40:27.154244 kernel: loop4: detected capacity change from 0 to 31320 Jan 20 01:40:27.154358 kernel: mlx5_core 403f:00:02.0 enP16447s1: Link up Jan 20 01:40:27.173830 kernel: loop5: detected capacity change from 0 to 114328 Jan 20 01:40:27.182944 kernel: hv_netvsc 7ced8d89-6ab6-7ced-8d89-6ab67ced8d89 eth0: Data path switched to VF: enP16447s1 Jan 20 01:40:27.183219 systemd-networkd[1356]: enP16447s1: Link UP Jan 20 01:40:27.183357 systemd-networkd[1356]: eth0: Link UP Jan 20 01:40:27.183360 systemd-networkd[1356]: eth0: Gained carrier Jan 20 01:40:27.183376 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:27.188103 systemd-networkd[1356]: enP16447s1: Gained carrier Jan 20 01:40:27.195899 kernel: loop6: detected capacity change from 0 to 114424 Jan 20 01:40:27.203517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 01:40:27.205839 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:40:27.215912 kernel: loop7: detected capacity change from 0 to 211168 Jan 20 01:40:27.218401 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:40:27.239989 (sd-merge)[1422]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 20 01:40:27.240405 (sd-merge)[1422]: Merged extensions into '/usr'. Jan 20 01:40:27.244469 systemd[1]: Reloading requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:40:27.244483 systemd[1]: Reloading... Jan 20 01:40:27.313817 zram_generator::config[1472]: No configuration found. Jan 20 01:40:27.446171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:40:27.518151 systemd[1]: Reloading finished in 273 ms. Jan 20 01:40:27.544669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:27.551333 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:40:27.558187 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:40:27.569929 systemd[1]: Starting ensure-sysext.service... Jan 20 01:40:27.588939 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:40:27.597902 systemd[1]: Reloading requested from client PID 1531 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:40:27.597917 systemd[1]: Reloading... Jan 20 01:40:27.605678 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:40:27.607092 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:40:27.607714 systemd-tmpfiles[1532]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:40:27.609247 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. Jan 20 01:40:27.609298 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. 
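Within this span (sd-merge) reports merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' extensions into /usr. For systemd-sysext to accept an image or directory as an extension it must carry an extension-release file whose ID matches the host (or "_any"). The sketch below lays out such a minimal tree; the extension name, ID value and payload path are illustrative assumptions, not details of the Flatcar extensions named in the log.

    import os

    def make_sysext_tree(root: str, name: str) -> None:
        """Create a minimal directory tree that systemd-sysext would accept as an extension."""
        release_dir = os.path.join(root, "usr/lib/extension-release.d")
        os.makedirs(release_dir, exist_ok=True)
        # ID=_any means the extension is not pinned to one distribution ID.
        with open(os.path.join(release_dir, f"extension-release.{name}"), "w") as fh:
            fh.write("ID=_any\n")
        # Anything under usr/ is overlaid onto the host's /usr when the extension is merged.
        os.makedirs(os.path.join(root, "usr/bin"), exist_ok=True)

    if __name__ == "__main__":
        # e.g. place the tree under /run/extensions/<name> and run `systemd-sysext merge`.
        make_sysext_tree("/run/extensions/example-ext", "example-ext")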
Jan 20 01:40:27.644763 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:40:27.644880 systemd-tmpfiles[1532]: Skipping /boot Jan 20 01:40:27.656117 systemd-tmpfiles[1532]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:40:27.656285 systemd-tmpfiles[1532]: Skipping /boot Jan 20 01:40:27.681872 zram_generator::config[1563]: No configuration found. Jan 20 01:40:27.781349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:40:27.855522 systemd[1]: Reloading finished in 257 ms. Jan 20 01:40:27.872039 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 01:40:27.884297 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:27.900984 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 01:40:27.931965 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:40:27.939288 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 01:40:27.947053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:40:27.955006 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:40:27.962457 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:40:27.969656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:27.973053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:27.982563 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:27.991252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:40:27.997143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:27.997964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:27.999862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:28.006409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:28.006530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:28.018966 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:40:28.019132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:28.029867 lvm[1625]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 01:40:28.029135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:28.036901 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:28.050095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:28.058948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:28.060842 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 20 01:40:28.070478 systemd-resolved[1632]: Positive Trust Anchors: Jan 20 01:40:28.070496 systemd-resolved[1632]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:40:28.070527 systemd-resolved[1632]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:40:28.074361 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 01:40:28.077519 augenrules[1652]: No rules Jan 20 01:40:28.081427 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 01:40:28.086963 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 01:40:28.093697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:28.093879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:28.100055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:28.100178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:28.111218 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:40:28.116128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:28.122991 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 01:40:28.127149 lvm[1662]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 01:40:28.130177 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:28.138035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:40:28.147570 systemd-resolved[1632]: Using system hostname 'ci-4081.3.6-n-e5d82fe73a'. Jan 20 01:40:28.159292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:28.167018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:40:28.171642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:28.171848 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 01:40:28.177161 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:40:28.183824 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 01:40:28.190374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:28.190660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:28.196015 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:40:28.196269 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:40:28.201343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:28.201598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:28.207751 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 20 01:40:28.208130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:28.217414 systemd[1]: Finished ensure-sysext.service. Jan 20 01:40:28.223628 systemd[1]: Reached target network.target - Network. Jan 20 01:40:28.227616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:40:28.232621 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:40:28.232798 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:40:28.504102 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 01:40:28.511050 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 01:40:28.528869 systemd-networkd[1356]: eth0: Gained IPv6LL Jan 20 01:40:28.531314 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:40:28.536972 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:40:30.741260 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:40:30.749741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:40:30.760909 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:40:30.772842 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:40:30.777562 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:40:30.781871 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:40:30.786870 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 01:40:30.792257 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:40:30.796562 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:40:30.801571 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:40:30.806914 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:40:30.806946 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:40:30.810600 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:40:30.814938 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 01:40:30.820545 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:40:30.858457 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:40:30.863261 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:40:30.867682 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:40:30.871459 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:40:30.875176 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:40:30.875196 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 20 01:40:30.883867 systemd[1]: Starting chronyd.service - NTP client/server... Jan 20 01:40:30.889923 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:40:30.905961 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 01:40:30.910558 (chronyd)[1680]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 20 01:40:30.916947 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 01:40:30.924901 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:40:30.935979 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 01:40:30.939145 jq[1686]: false Jan 20 01:40:30.936684 chronyd[1689]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 20 01:40:30.942183 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:40:30.942224 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 20 01:40:30.944096 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 20 01:40:30.948704 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 20 01:40:30.949897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:40:30.956174 KVP[1690]: KVP starting; pid is:1690 Jan 20 01:40:30.956773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:40:30.966025 chronyd[1689]: Timezone right/UTC failed leap second check, ignoring Jan 20 01:40:30.966415 chronyd[1689]: Loaded seccomp filter (level 2) Jan 20 01:40:30.969513 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 01:40:30.975924 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:40:30.982957 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 01:40:30.992743 extend-filesystems[1687]: Found loop4 Jan 20 01:40:30.992743 extend-filesystems[1687]: Found loop5 Jan 20 01:40:30.992743 extend-filesystems[1687]: Found loop6 Jan 20 01:40:31.017139 extend-filesystems[1687]: Found loop7 Jan 20 01:40:31.017139 extend-filesystems[1687]: Found sda Jan 20 01:40:31.017139 extend-filesystems[1687]: Found sda1 Jan 20 01:40:31.017139 extend-filesystems[1687]: Found sda2 Jan 20 01:40:31.017139 extend-filesystems[1687]: Found sda3 Jan 20 01:40:31.017139 extend-filesystems[1687]: Found usr Jan 20 01:40:31.017139 extend-filesystems[1687]: Found sda4 Jan 20 01:40:31.017139 extend-filesystems[1687]: Found sda6 Jan 20 01:40:31.017139 extend-filesystems[1687]: Found sda7 Jan 20 01:40:31.017139 extend-filesystems[1687]: Found sda9 Jan 20 01:40:31.017139 extend-filesystems[1687]: Checking size of /dev/sda9 Jan 20 01:40:31.171254 kernel: hv_utils: KVP IC version 4.0 Jan 20 01:40:30.997666 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
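The hv_fcopy and hv_vss daemons above are skipped because their VMBus device nodes are absent, while the KVP daemon does start; the conditions are plain path checks. A sketch of the same test (the hv_kvp path is the conventional one and is assumed here, since the log only names hv_fcopy and hv_vss):

# Sketch: mirror the ConditionPathExists checks for the Hyper-V guest daemons.
import os

VMBUS_DAEMONS = {
    "hv_kvp_daemon.service":   "/dev/vmbus/hv_kvp",     # assumed path
    "hv_vss_daemon.service":   "/dev/vmbus/hv_vss",     # from the log
    "hv_fcopy_daemon.service": "/dev/vmbus/hv_fcopy",   # from the log
}

for unit, dev in VMBUS_DAEMONS.items():
    print(unit, "->", "start" if os.path.exists(dev) else "skip (no device)")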
Jan 20 01:40:31.038392 dbus-daemon[1683]: [system] SELinux support is enabled Jan 20 01:40:31.172205 extend-filesystems[1687]: Old size kept for /dev/sda9 Jan 20 01:40:31.172205 extend-filesystems[1687]: Found sr0 Jan 20 01:40:31.021128 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:40:31.076142 KVP[1690]: KVP LIC Version: 3.1 Jan 20 01:40:31.192307 coreos-metadata[1682]: Jan 20 01:40:31.182 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 01:40:31.192307 coreos-metadata[1682]: Jan 20 01:40:31.187 INFO Fetch successful Jan 20 01:40:31.192307 coreos-metadata[1682]: Jan 20 01:40:31.187 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 20 01:40:31.028944 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:40:31.029424 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:40:31.034976 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:40:31.200272 coreos-metadata[1682]: Jan 20 01:40:31.192 INFO Fetch successful Jan 20 01:40:31.200272 coreos-metadata[1682]: Jan 20 01:40:31.192 INFO Fetching http://168.63.129.16/machine/980424b7-1d22-4cbc-860a-89399520f18c/b19e6d23%2Dcb86%2D4fa0%2D8260%2Dec0844978096.%5Fci%2D4081.3.6%2Dn%2De5d82fe73a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 20 01:40:31.200272 coreos-metadata[1682]: Jan 20 01:40:31.199 INFO Fetch successful Jan 20 01:40:31.200364 jq[1712]: true Jan 20 01:40:31.056932 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:40:31.200597 update_engine[1710]: I20260120 01:40:31.159054 1710 main.cc:92] Flatcar Update Engine starting Jan 20 01:40:31.200597 update_engine[1710]: I20260120 01:40:31.163726 1710 update_check_scheduler.cc:74] Next update check in 11m12s Jan 20 01:40:31.070999 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:40:31.212143 coreos-metadata[1682]: Jan 20 01:40:31.201 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 20 01:40:31.096835 systemd[1]: Started chronyd.service - NTP client/server. Jan 20 01:40:31.117120 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:40:31.117296 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:40:31.117537 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:40:31.117665 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:40:31.147270 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:40:31.147458 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:40:31.156385 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:40:31.188149 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:40:31.188304 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
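The vmSize fetch above goes to the Azure instance metadata service (IMDS) on the link-local address 169.254.169.254, which only answers requests carrying a "Metadata: true" header. The same query done by hand, runnable only from inside the VM:

# Sketch: repeat the IMDS query coreos-metadata issues above.
from urllib.request import Request, urlopen

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = Request(URL, headers={"Metadata": "true"})
with urlopen(req, timeout=5) as resp:
    print(resp.read().decode())   # prints the VM size as plain text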
Jan 20 01:40:31.223894 coreos-metadata[1682]: Jan 20 01:40:31.216 INFO Fetch successful Jan 20 01:40:31.232101 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:40:31.232132 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:40:31.247334 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:40:31.247355 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:40:31.268096 (ntainerd)[1737]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 01:40:31.271376 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:40:31.276797 tar[1728]: linux-arm64/LICENSE Jan 20 01:40:31.276797 tar[1728]: linux-arm64/helm Jan 20 01:40:31.286573 systemd-logind[1707]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 20 01:40:31.286843 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:40:31.293382 systemd-logind[1707]: New seat seat0. Jan 20 01:40:31.295210 jq[1736]: true Jan 20 01:40:31.297174 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 01:40:31.329797 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1731) Jan 20 01:40:31.387704 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 01:40:31.392988 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:40:31.441704 bash[1794]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:40:31.444819 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 01:40:31.456271 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 01:40:31.601873 locksmithd[1754]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:40:31.983541 containerd[1737]: time="2026-01-20T01:40:31.983381760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 01:40:32.048214 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:40:32.050528 containerd[1737]: time="2026-01-20T01:40:32.050488320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:32.053164 containerd[1737]: time="2026-01-20T01:40:32.053131280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:32.053244 containerd[1737]: time="2026-01-20T01:40:32.053229840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 01:40:32.053374 containerd[1737]: time="2026-01-20T01:40:32.053284680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 20 01:40:32.053504 containerd[1737]: time="2026-01-20T01:40:32.053487400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 01:40:32.053797 containerd[1737]: time="2026-01-20T01:40:32.053564760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:32.053797 containerd[1737]: time="2026-01-20T01:40:32.053635600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:32.053797 containerd[1737]: time="2026-01-20T01:40:32.053648760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:32.053928 containerd[1737]: time="2026-01-20T01:40:32.053908240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:32.053974 containerd[1737]: time="2026-01-20T01:40:32.053963240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:32.054037 containerd[1737]: time="2026-01-20T01:40:32.054024680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:32.054651 containerd[1737]: time="2026-01-20T01:40:32.054074440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:32.054651 containerd[1737]: time="2026-01-20T01:40:32.054162680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:32.054651 containerd[1737]: time="2026-01-20T01:40:32.054372120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:32.054651 containerd[1737]: time="2026-01-20T01:40:32.054477640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:32.054651 containerd[1737]: time="2026-01-20T01:40:32.054490560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 01:40:32.054651 containerd[1737]: time="2026-01-20T01:40:32.054576720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 01:40:32.054651 containerd[1737]: time="2026-01-20T01:40:32.054616400Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.066348120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.066413000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.066428800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.066445280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.066462040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.066624560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.066928320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.067024960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.067044720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.067057920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.067070840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.067085240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.067099960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 01:40:32.068162 containerd[1737]: time="2026-01-20T01:40:32.067115800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067131680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067146800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067159680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067171560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067192640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067206000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067218360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067231000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067243080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067255920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067267360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067281560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067294440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068486 containerd[1737]: time="2026-01-20T01:40:32.067307720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067319120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067335480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067348640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067365040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067385280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067397120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067407600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067450600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067467600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067477840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067490480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067499680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067511600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 20 01:40:32.068811 containerd[1737]: time="2026-01-20T01:40:32.067522760Z" level=info msg="NRI interface is disabled by configuration." Jan 20 01:40:32.069058 containerd[1737]: time="2026-01-20T01:40:32.067533440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 20 01:40:32.070402 containerd[1737]: time="2026-01-20T01:40:32.070328080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 01:40:32.070606 containerd[1737]: time="2026-01-20T01:40:32.070590080Z" level=info msg="Connect containerd service" Jan 20 01:40:32.071201 containerd[1737]: time="2026-01-20T01:40:32.070819600Z" level=info msg="using legacy CRI server" Jan 20 01:40:32.071201 containerd[1737]: time="2026-01-20T01:40:32.070836920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:40:32.071201 containerd[1737]: time="2026-01-20T01:40:32.070946360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 01:40:32.073107 
containerd[1737]: time="2026-01-20T01:40:32.072996600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:40:32.073278 containerd[1737]: time="2026-01-20T01:40:32.073232040Z" level=info msg="Start subscribing containerd event" Jan 20 01:40:32.073364 containerd[1737]: time="2026-01-20T01:40:32.073350880Z" level=info msg="Start recovering state" Jan 20 01:40:32.074056 containerd[1737]: time="2026-01-20T01:40:32.073482880Z" level=info msg="Start event monitor" Jan 20 01:40:32.074056 containerd[1737]: time="2026-01-20T01:40:32.073505040Z" level=info msg="Start snapshots syncer" Jan 20 01:40:32.074056 containerd[1737]: time="2026-01-20T01:40:32.073515480Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:40:32.074056 containerd[1737]: time="2026-01-20T01:40:32.073527200Z" level=info msg="Start streaming server" Jan 20 01:40:32.074970 containerd[1737]: time="2026-01-20T01:40:32.074877400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:40:32.074970 containerd[1737]: time="2026-01-20T01:40:32.074938400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:40:32.077233 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:40:32.078235 containerd[1737]: time="2026-01-20T01:40:32.076369360Z" level=info msg="containerd successfully booted in 0.093710s" Jan 20 01:40:32.086697 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:40:32.101733 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:40:32.116853 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 20 01:40:32.125696 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:40:32.126625 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:40:32.143019 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:40:32.155072 tar[1728]: linux-arm64/README.md Jan 20 01:40:32.169249 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:40:32.182990 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 20 01:40:32.189105 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:40:32.204738 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:40:32.211323 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 20 01:40:32.217498 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:40:32.346093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:40:32.351429 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:40:32.352176 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:32.361669 systemd[1]: Startup finished in 594ms (kernel) + 11.147s (initrd) + 10.233s (userspace) = 21.975s. Jan 20 01:40:32.645635 login[1840]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:32.645922 login[1839]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:40:32.655939 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 20 01:40:32.663937 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:40:32.666338 systemd-logind[1707]: New session 1 of user core. Jan 20 01:40:32.669840 systemd-logind[1707]: New session 2 of user core. Jan 20 01:40:32.695872 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:40:32.705036 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:40:32.708142 (systemd)[1858]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:40:32.813540 kubelet[1846]: E0120 01:40:32.813495 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:40:32.816556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:40:32.816679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:40:32.838452 systemd[1858]: Queued start job for default target default.target. Jan 20 01:40:32.848779 systemd[1858]: Created slice app.slice - User Application Slice. Jan 20 01:40:32.848938 systemd[1858]: Reached target paths.target - Paths. Jan 20 01:40:32.849010 systemd[1858]: Reached target timers.target - Timers. Jan 20 01:40:32.850268 systemd[1858]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:40:32.862686 systemd[1858]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:40:32.862807 systemd[1858]: Reached target sockets.target - Sockets. Jan 20 01:40:32.862820 systemd[1858]: Reached target basic.target - Basic System. Jan 20 01:40:32.862863 systemd[1858]: Reached target default.target - Main User Target. Jan 20 01:40:32.862889 systemd[1858]: Startup finished in 148ms. Jan 20 01:40:32.862987 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:40:32.864181 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:40:32.864822 systemd[1]: Started session-2.scope - Session 2 of User core. 
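The kubelet exit above (and the identical failures later in this log) comes from one missing file: /var/lib/kubelet/config.yaml, which is normally written by kubeadm during init or join, so the failure loop is expected on a node that has not been joined yet. A minimal reproduction of that pre-flight check (illustrative, not kubelet's actual code):

# Sketch: the check kubelet fails above - the config file simply is not there yet.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.is_file():
    raise SystemExit(f"failed to load kubelet config file {KUBELET_CONFIG}: "
                     "no such file or directory")
print("kubelet config present")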
Jan 20 01:40:33.749806 waagent[1836]: 2026-01-20T01:40:33.746269Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 20 01:40:33.751181 waagent[1836]: 2026-01-20T01:40:33.751115Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 20 01:40:33.754624 waagent[1836]: 2026-01-20T01:40:33.754579Z INFO Daemon Daemon Python: 3.11.9 Jan 20 01:40:33.758050 waagent[1836]: 2026-01-20T01:40:33.757853Z INFO Daemon Daemon Run daemon Jan 20 01:40:33.761390 waagent[1836]: 2026-01-20T01:40:33.761346Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 20 01:40:33.768060 waagent[1836]: 2026-01-20T01:40:33.768010Z INFO Daemon Daemon Using waagent for provisioning Jan 20 01:40:33.772042 waagent[1836]: 2026-01-20T01:40:33.772005Z INFO Daemon Daemon Activate resource disk Jan 20 01:40:33.775537 waagent[1836]: 2026-01-20T01:40:33.775502Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 20 01:40:33.784484 waagent[1836]: 2026-01-20T01:40:33.784436Z INFO Daemon Daemon Found device: None Jan 20 01:40:33.787975 waagent[1836]: 2026-01-20T01:40:33.787937Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 20 01:40:33.794575 waagent[1836]: 2026-01-20T01:40:33.794536Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 20 01:40:33.804445 waagent[1836]: 2026-01-20T01:40:33.804396Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 01:40:33.809429 waagent[1836]: 2026-01-20T01:40:33.809388Z INFO Daemon Daemon Running default provisioning handler Jan 20 01:40:33.821807 waagent[1836]: 2026-01-20T01:40:33.819523Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 20 01:40:33.829931 waagent[1836]: 2026-01-20T01:40:33.829875Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 20 01:40:33.836900 waagent[1836]: 2026-01-20T01:40:33.836859Z INFO Daemon Daemon cloud-init is enabled: False Jan 20 01:40:33.840827 waagent[1836]: 2026-01-20T01:40:33.840793Z INFO Daemon Daemon Copying ovf-env.xml Jan 20 01:40:33.942808 waagent[1836]: 2026-01-20T01:40:33.939188Z INFO Daemon Daemon Successfully mounted dvd Jan 20 01:40:33.967344 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 20 01:40:33.969249 waagent[1836]: 2026-01-20T01:40:33.969180Z INFO Daemon Daemon Detect protocol endpoint Jan 20 01:40:33.973610 waagent[1836]: 2026-01-20T01:40:33.973545Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 01:40:33.977979 waagent[1836]: 2026-01-20T01:40:33.977927Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 20 01:40:33.982987 waagent[1836]: 2026-01-20T01:40:33.982941Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 20 01:40:33.987105 waagent[1836]: 2026-01-20T01:40:33.987063Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 20 01:40:33.991022 waagent[1836]: 2026-01-20T01:40:33.990981Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 20 01:40:34.034367 waagent[1836]: 2026-01-20T01:40:34.034265Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 20 01:40:34.039575 waagent[1836]: 2026-01-20T01:40:34.039547Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 20 01:40:34.043749 waagent[1836]: 2026-01-20T01:40:34.043702Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 20 01:40:34.301933 waagent[1836]: 2026-01-20T01:40:34.301794Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 20 01:40:34.306932 waagent[1836]: 2026-01-20T01:40:34.306875Z INFO Daemon Daemon Forcing an update of the goal state. Jan 20 01:40:34.314340 waagent[1836]: 2026-01-20T01:40:34.314295Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 01:40:34.331694 waagent[1836]: 2026-01-20T01:40:34.331652Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 20 01:40:34.336050 waagent[1836]: 2026-01-20T01:40:34.336009Z INFO Daemon Jan 20 01:40:34.338152 waagent[1836]: 2026-01-20T01:40:34.338114Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c5b0d97c-a66f-4a23-8fe7-95d6980c4b43 eTag: 12619298698648856449 source: Fabric] Jan 20 01:40:34.346505 waagent[1836]: 2026-01-20T01:40:34.346466Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 20 01:40:34.351567 waagent[1836]: 2026-01-20T01:40:34.351528Z INFO Daemon Jan 20 01:40:34.353651 waagent[1836]: 2026-01-20T01:40:34.353618Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 20 01:40:34.362733 waagent[1836]: 2026-01-20T01:40:34.362700Z INFO Daemon Daemon Downloading artifacts profile blob Jan 20 01:40:34.429971 waagent[1836]: 2026-01-20T01:40:34.429891Z INFO Daemon Downloaded certificate {'thumbprint': '84BD63BE293F3BBFDC49F71CF5C5AFF11D7BB4B8', 'hasPrivateKey': True} Jan 20 01:40:34.437391 waagent[1836]: 2026-01-20T01:40:34.437344Z INFO Daemon Fetch goal state completed Jan 20 01:40:34.446875 waagent[1836]: 2026-01-20T01:40:34.446823Z INFO Daemon Daemon Starting provisioning Jan 20 01:40:34.450730 waagent[1836]: 2026-01-20T01:40:34.450688Z INFO Daemon Daemon Handle ovf-env.xml. Jan 20 01:40:34.454318 waagent[1836]: 2026-01-20T01:40:34.454284Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-e5d82fe73a] Jan 20 01:40:34.460702 waagent[1836]: 2026-01-20T01:40:34.460647Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-e5d82fe73a] Jan 20 01:40:34.465949 waagent[1836]: 2026-01-20T01:40:34.465902Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 20 01:40:34.471302 waagent[1836]: 2026-01-20T01:40:34.471258Z INFO Daemon Daemon Primary interface is [eth0] Jan 20 01:40:34.513299 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:34.513306 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 01:40:34.513330 systemd-networkd[1356]: eth0: DHCP lease lost Jan 20 01:40:34.514584 waagent[1836]: 2026-01-20T01:40:34.514516Z INFO Daemon Daemon Create user account if not exists Jan 20 01:40:34.518911 waagent[1836]: 2026-01-20T01:40:34.518859Z INFO Daemon Daemon User core already exists, skip useradd Jan 20 01:40:34.519862 systemd-networkd[1356]: eth0: DHCPv6 lease lost Jan 20 01:40:34.523371 waagent[1836]: 2026-01-20T01:40:34.523315Z INFO Daemon Daemon Configure sudoer Jan 20 01:40:34.526845 waagent[1836]: 2026-01-20T01:40:34.526763Z INFO Daemon Daemon Configure sshd Jan 20 01:40:34.530259 waagent[1836]: 2026-01-20T01:40:34.530213Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 20 01:40:34.539498 waagent[1836]: 2026-01-20T01:40:34.539454Z INFO Daemon Daemon Deploy ssh public key. Jan 20 01:40:34.551839 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:40:35.644776 waagent[1836]: 2026-01-20T01:40:35.644729Z INFO Daemon Daemon Provisioning complete Jan 20 01:40:35.660044 waagent[1836]: 2026-01-20T01:40:35.659996Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 20 01:40:35.665090 waagent[1836]: 2026-01-20T01:40:35.665044Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 20 01:40:35.672920 waagent[1836]: 2026-01-20T01:40:35.672878Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 20 01:40:35.801408 waagent[1908]: 2026-01-20T01:40:35.800799Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 20 01:40:35.801408 waagent[1908]: 2026-01-20T01:40:35.800940Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 20 01:40:35.801408 waagent[1908]: 2026-01-20T01:40:35.800993Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 20 01:40:35.838135 waagent[1908]: 2026-01-20T01:40:35.838060Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 20 01:40:35.838449 waagent[1908]: 2026-01-20T01:40:35.838413Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:40:35.838584 waagent[1908]: 2026-01-20T01:40:35.838553Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:40:35.845971 waagent[1908]: 2026-01-20T01:40:35.845913Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 01:40:35.851232 waagent[1908]: 2026-01-20T01:40:35.851192Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 20 01:40:35.851826 waagent[1908]: 2026-01-20T01:40:35.851769Z INFO ExtHandler Jan 20 01:40:35.851977 waagent[1908]: 2026-01-20T01:40:35.851943Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 18e261ed-62fb-41ea-a843-c87f90575564 eTag: 12619298698648856449 source: Fabric] Jan 20 01:40:35.852342 waagent[1908]: 2026-01-20T01:40:35.852303Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
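The goal-state certificate the agent downloads above is identified only by its thumbprint. For Azure certificates that thumbprint is, by convention, the SHA-1 digest of the certificate's DER encoding in upper-case hex; a sketch of the computation, with a placeholder path:

# Sketch: derive a thumbprint like '84BD63BE...' from a PEM certificate.
# The path is a placeholder; waagent keeps its certificates under /var/lib/waagent.
import hashlib, ssl

pem = open("/var/lib/waagent/example.crt").read()   # placeholder file name
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha1(der).hexdigest().upper())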
Jan 20 01:40:35.853655 waagent[1908]: 2026-01-20T01:40:35.852999Z INFO ExtHandler Jan 20 01:40:35.853655 waagent[1908]: 2026-01-20T01:40:35.853079Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 20 01:40:35.856797 waagent[1908]: 2026-01-20T01:40:35.856532Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 01:40:35.920513 waagent[1908]: 2026-01-20T01:40:35.920376Z INFO ExtHandler Downloaded certificate {'thumbprint': '84BD63BE293F3BBFDC49F71CF5C5AFF11D7BB4B8', 'hasPrivateKey': True} Jan 20 01:40:35.920999 waagent[1908]: 2026-01-20T01:40:35.920960Z INFO ExtHandler Fetch goal state completed Jan 20 01:40:35.935066 waagent[1908]: 2026-01-20T01:40:35.935008Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1908 Jan 20 01:40:35.935213 waagent[1908]: 2026-01-20T01:40:35.935180Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 20 01:40:35.936817 waagent[1908]: 2026-01-20T01:40:35.936767Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 20 01:40:35.937176 waagent[1908]: 2026-01-20T01:40:35.937140Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 20 01:40:35.971220 waagent[1908]: 2026-01-20T01:40:35.971177Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 20 01:40:35.971410 waagent[1908]: 2026-01-20T01:40:35.971374Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 20 01:40:35.977504 waagent[1908]: 2026-01-20T01:40:35.977455Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 20 01:40:35.983800 systemd[1]: Reloading requested from client PID 1921 ('systemctl') (unit waagent.service)... Jan 20 01:40:35.983812 systemd[1]: Reloading... Jan 20 01:40:36.061810 zram_generator::config[1958]: No configuration found. Jan 20 01:40:36.160434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:40:36.241425 systemd[1]: Reloading finished in 257 ms. Jan 20 01:40:36.268811 waagent[1908]: 2026-01-20T01:40:36.265909Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 20 01:40:36.272372 systemd[1]: Reloading requested from client PID 2009 ('systemctl') (unit waagent.service)... Jan 20 01:40:36.272384 systemd[1]: Reloading... Jan 20 01:40:36.351962 zram_generator::config[2044]: No configuration found. Jan 20 01:40:36.448652 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:40:36.523168 systemd[1]: Reloading finished in 250 ms. Jan 20 01:40:36.551152 waagent[1908]: 2026-01-20T01:40:36.550966Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 20 01:40:36.551236 waagent[1908]: 2026-01-20T01:40:36.551173Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 20 01:40:37.495288 waagent[1908]: 2026-01-20T01:40:37.495203Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 20 01:40:37.495869 waagent[1908]: 2026-01-20T01:40:37.495821Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 20 01:40:37.496628 waagent[1908]: 2026-01-20T01:40:37.496554Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 20 01:40:37.497052 waagent[1908]: 2026-01-20T01:40:37.496917Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 20 01:40:37.498046 waagent[1908]: 2026-01-20T01:40:37.497313Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:40:37.498046 waagent[1908]: 2026-01-20T01:40:37.497399Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:40:37.498046 waagent[1908]: 2026-01-20T01:40:37.497596Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 20 01:40:37.498046 waagent[1908]: 2026-01-20T01:40:37.497755Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 20 01:40:37.498046 waagent[1908]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 20 01:40:37.498046 waagent[1908]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 20 01:40:37.498046 waagent[1908]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 20 01:40:37.498046 waagent[1908]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:40:37.498046 waagent[1908]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:40:37.498046 waagent[1908]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:40:37.498373 waagent[1908]: 2026-01-20T01:40:37.498317Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:40:37.498530 waagent[1908]: 2026-01-20T01:40:37.498487Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 20 01:40:37.498585 waagent[1908]: 2026-01-20T01:40:37.498539Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 20 01:40:37.498758 waagent[1908]: 2026-01-20T01:40:37.498718Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:40:37.499207 waagent[1908]: 2026-01-20T01:40:37.499155Z INFO EnvHandler ExtHandler Configure routes Jan 20 01:40:37.499384 waagent[1908]: 2026-01-20T01:40:37.499342Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 20 01:40:37.499424 waagent[1908]: 2026-01-20T01:40:37.499389Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
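In the /proc/net/route dump above, the destination and gateway columns are little-endian hex: 10813FA8 is 168.63.129.16 (the WireServer), FEA9FEA9 is 169.254.169.254 (IMDS), and 0114C80A is the 10.200.20.1 gateway. A short decoder for those fields:

# Sketch: decode the little-endian hex addresses from the routing table above.
import socket, struct

def decode(hexaddr: str) -> str:
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

for dest, gw in [("00000000", "0114C80A"),    # default route via 10.200.20.1
                 ("10813FA8", "0114C80A"),    # host route to 168.63.129.16
                 ("FEA9FEA9", "0114C80A")]:   # host route to 169.254.169.254
    print(decode(dest), "via", decode(gw))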
Jan 20 01:40:37.499678 waagent[1908]: 2026-01-20T01:40:37.499633Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 20 01:40:37.500078 waagent[1908]: 2026-01-20T01:40:37.500034Z INFO EnvHandler ExtHandler Gateway:None Jan 20 01:40:37.501006 waagent[1908]: 2026-01-20T01:40:37.500970Z INFO EnvHandler ExtHandler Routes:None Jan 20 01:40:37.505195 waagent[1908]: 2026-01-20T01:40:37.505152Z INFO ExtHandler ExtHandler Jan 20 01:40:37.505516 waagent[1908]: 2026-01-20T01:40:37.505475Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8c416143-a252-4145-a617-f62d43255cfe correlation da705036-8576-449e-acb2-417d8262f9ae created: 2026-01-20T01:39:41.019356Z] Jan 20 01:40:37.506514 waagent[1908]: 2026-01-20T01:40:37.506462Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 20 01:40:37.508618 waagent[1908]: 2026-01-20T01:40:37.507948Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 20 01:40:37.535865 waagent[1908]: 2026-01-20T01:40:37.535455Z INFO MonitorHandler ExtHandler Network interfaces: Jan 20 01:40:37.535865 waagent[1908]: Executing ['ip', '-a', '-o', 'link']: Jan 20 01:40:37.535865 waagent[1908]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 20 01:40:37.535865 waagent[1908]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:89:6a:b6 brd ff:ff:ff:ff:ff:ff Jan 20 01:40:37.535865 waagent[1908]: 3: enP16447s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:89:6a:b6 brd ff:ff:ff:ff:ff:ff\ altname enP16447p0s2 Jan 20 01:40:37.535865 waagent[1908]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 20 01:40:37.535865 waagent[1908]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 20 01:40:37.535865 waagent[1908]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 20 01:40:37.535865 waagent[1908]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 20 01:40:37.535865 waagent[1908]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 20 01:40:37.535865 waagent[1908]: 2: eth0 inet6 fe80::7eed:8dff:fe89:6ab6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 20 01:40:37.553440 waagent[1908]: 2026-01-20T01:40:37.553379Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F9D4F694-9143-4838-A098-5BC62E2C0ED3;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 20 01:40:37.600251 waagent[1908]: 2026-01-20T01:40:37.600173Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 20 01:40:37.600251 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:40:37.600251 waagent[1908]: pkts bytes target prot opt in out source destination Jan 20 01:40:37.600251 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:40:37.600251 waagent[1908]: pkts bytes target prot opt in out source destination Jan 20 01:40:37.600251 waagent[1908]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:40:37.600251 waagent[1908]: pkts bytes target prot opt in out source destination Jan 20 01:40:37.600251 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 01:40:37.600251 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 01:40:37.600251 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 01:40:37.602984 waagent[1908]: 2026-01-20T01:40:37.602931Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 20 01:40:37.602984 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:40:37.602984 waagent[1908]: pkts bytes target prot opt in out source destination Jan 20 01:40:37.602984 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:40:37.602984 waagent[1908]: pkts bytes target prot opt in out source destination Jan 20 01:40:37.602984 waagent[1908]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:40:37.602984 waagent[1908]: pkts bytes target prot opt in out source destination Jan 20 01:40:37.602984 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 01:40:37.602984 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 01:40:37.602984 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 01:40:37.603215 waagent[1908]: 2026-01-20T01:40:37.603182Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 20 01:40:42.889646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:40:42.897962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:40:42.996224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:40:43.007042 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:43.140229 kubelet[2136]: E0120 01:40:43.140117 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:40:43.143741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:40:43.144012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:40:53.389850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:40:53.400353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:40:53.734963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
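The three OUTPUT rules waagent programs above implement the usual WireServer lockdown: DNS to 168.63.129.16 is allowed, traffic owned by UID 0 is allowed, and any other new or invalid connection to that address is dropped, so non-root processes cannot reach the WireServer. A first-match sketch of those semantics (this models the listed rules; it is not iptables):

# Sketch: first-match evaluation of the wireserver OUTPUT rules listed above.
WIRESERVER = "168.63.129.16"

def verdict(dst: str, dport: int, uid: int, ctstate: str) -> str:
    if dst != WIRESERVER:
        return "ACCEPT"        # rules only match the wireserver; chain policy is ACCEPT
    if dport == 53:
        return "ACCEPT"        # tcp dpt:53
    if uid == 0:
        return "ACCEPT"        # owner UID match 0
    if ctstate in ("INVALID", "NEW"):
        return "DROP"          # ctstate INVALID,NEW
    return "ACCEPT"

print(verdict(WIRESERVER, 80, uid=1000, ctstate="NEW"))   # DROP
print(verdict(WIRESERVER, 80, uid=0,    ctstate="NEW"))   # ACCEPT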
Jan 20 01:40:53.738758 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:53.777100 kubelet[2151]: E0120 01:40:53.777045 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:40:53.780455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:40:53.780594 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:40:54.753134 chronyd[1689]: Selected source PHC0 Jan 20 01:41:01.599818 waagent[1908]: 2026-01-20T01:41:01.599203Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 20 01:41:01.607921 waagent[1908]: 2026-01-20T01:41:01.607880Z INFO ExtHandler Jan 20 01:41:01.608023 waagent[1908]: 2026-01-20T01:41:01.607994Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ad7e23b6-1918-4477-a7b9-395a59109bee eTag: 16293504208046584279 source: Fabric] Jan 20 01:41:01.608364 waagent[1908]: 2026-01-20T01:41:01.608328Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 20 01:41:01.608987 waagent[1908]: 2026-01-20T01:41:01.608943Z INFO ExtHandler Jan 20 01:41:01.609052 waagent[1908]: 2026-01-20T01:41:01.609026Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 20 01:41:01.671119 waagent[1908]: 2026-01-20T01:41:01.671085Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 01:41:01.728235 waagent[1908]: 2026-01-20T01:41:01.728147Z INFO ExtHandler Downloaded certificate {'thumbprint': '84BD63BE293F3BBFDC49F71CF5C5AFF11D7BB4B8', 'hasPrivateKey': True} Jan 20 01:41:01.728753 waagent[1908]: 2026-01-20T01:41:01.728709Z INFO ExtHandler Fetch goal state completed Jan 20 01:41:01.729168 waagent[1908]: 2026-01-20T01:41:01.729127Z INFO ExtHandler ExtHandler Jan 20 01:41:01.729237 waagent[1908]: 2026-01-20T01:41:01.729209Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 2caa6714-1473-42a7-9cc6-31563e8a933c correlation da705036-8576-449e-acb2-417d8262f9ae created: 2026-01-20T01:40:55.581245Z] Jan 20 01:41:01.729550 waagent[1908]: 2026-01-20T01:41:01.729514Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 20 01:41:01.730075 waagent[1908]: 2026-01-20T01:41:01.730040Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 20 01:41:03.889759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:41:03.897954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:04.235545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:41:04.239283 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:04.272413 kubelet[2171]: E0120 01:41:04.272327 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:04.275322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:04.275461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:07.650949 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 01:41:07.657036 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.16.10:43522.service - OpenSSH per-connection server daemon (10.200.16.10:43522). Jan 20 01:41:08.175347 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 43522 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:08.176666 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:08.181045 systemd-logind[1707]: New session 3 of user core. Jan 20 01:41:08.187958 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:41:08.589042 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.16.10:43528.service - OpenSSH per-connection server daemon (10.200.16.10:43528). Jan 20 01:41:09.031464 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 43528 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:09.032773 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:09.036148 systemd-logind[1707]: New session 4 of user core. Jan 20 01:41:09.046933 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:41:09.361534 sshd[2183]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:09.364091 systemd-logind[1707]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:41:09.364669 systemd[1]: sshd@1-10.200.20.17:22-10.200.16.10:43528.service: Deactivated successfully. Jan 20 01:41:09.366213 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:41:09.368050 systemd-logind[1707]: Removed session 4. Jan 20 01:41:09.447544 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.16.10:43534.service - OpenSSH per-connection server daemon (10.200.16.10:43534). Jan 20 01:41:09.900504 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 43534 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:09.901865 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:09.906385 systemd-logind[1707]: New session 5 of user core. Jan 20 01:41:09.911936 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:41:10.228709 sshd[2190]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:10.231788 systemd-logind[1707]: Session 5 logged out. Waiting for processes to exit. Jan 20 01:41:10.231949 systemd[1]: sshd@2-10.200.20.17:22-10.200.16.10:43534.service: Deactivated successfully. Jan 20 01:41:10.233372 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:41:10.235289 systemd-logind[1707]: Removed session 5. 
Jan 20 01:41:10.310516 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.16.10:34520.service - OpenSSH per-connection server daemon (10.200.16.10:34520). Jan 20 01:41:10.759314 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 34520 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:10.760647 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:10.764261 systemd-logind[1707]: New session 6 of user core. Jan 20 01:41:10.770922 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:41:11.094604 sshd[2197]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:11.097882 systemd[1]: sshd@3-10.200.20.17:22-10.200.16.10:34520.service: Deactivated successfully. Jan 20 01:41:11.099295 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:41:11.099881 systemd-logind[1707]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:41:11.100602 systemd-logind[1707]: Removed session 6. Jan 20 01:41:11.178988 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.16.10:34528.service - OpenSSH per-connection server daemon (10.200.16.10:34528). Jan 20 01:41:11.619905 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 34528 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:11.621194 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:11.624733 systemd-logind[1707]: New session 7 of user core. Jan 20 01:41:11.631916 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 01:41:12.000966 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:41:12.001229 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:41:12.062937 sudo[2207]: pam_unix(sudo:session): session closed for user root Jan 20 01:41:12.140062 sshd[2204]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:12.143217 systemd-logind[1707]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:41:12.144199 systemd[1]: sshd@4-10.200.20.17:22-10.200.16.10:34528.service: Deactivated successfully. Jan 20 01:41:12.145646 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:41:12.147554 systemd-logind[1707]: Removed session 7. Jan 20 01:41:12.220670 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.16.10:34534.service - OpenSSH per-connection server daemon (10.200.16.10:34534). Jan 20 01:41:12.667528 sshd[2212]: Accepted publickey for core from 10.200.16.10 port 34534 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:12.669374 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:12.672926 systemd-logind[1707]: New session 8 of user core. Jan 20 01:41:12.682969 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 20 01:41:12.922654 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 01:41:12.922936 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:41:12.925758 sudo[2216]: pam_unix(sudo:session): session closed for user root Jan 20 01:41:12.930045 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 01:41:12.930286 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:41:12.947087 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 01:41:12.948538 auditctl[2219]: No rules Jan 20 01:41:12.948845 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:41:12.949006 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 01:41:12.952099 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 01:41:12.972571 augenrules[2237]: No rules Jan 20 01:41:12.973868 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 01:41:12.975079 sudo[2215]: pam_unix(sudo:session): session closed for user root Jan 20 01:41:13.052778 sshd[2212]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:13.055611 systemd-logind[1707]: Session 8 logged out. Waiting for processes to exit. Jan 20 01:41:13.055890 systemd[1]: sshd@5-10.200.20.17:22-10.200.16.10:34534.service: Deactivated successfully. Jan 20 01:41:13.057485 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:41:13.059239 systemd-logind[1707]: Removed session 8. Jan 20 01:41:13.133687 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.16.10:34540.service - OpenSSH per-connection server daemon (10.200.16.10:34540). Jan 20 01:41:13.579188 sshd[2245]: Accepted publickey for core from 10.200.16.10 port 34540 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:13.580494 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:13.584997 systemd-logind[1707]: New session 9 of user core. Jan 20 01:41:13.592957 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 01:41:13.834320 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:41:13.834591 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:41:14.389674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 01:41:14.395943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:14.985167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:14.989002 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:15.025111 kubelet[2266]: E0120 01:41:15.025062 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:15.028034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:15.028291 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 01:41:15.108876 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 20 01:41:15.336092 (dockerd)[2278]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:41:15.336586 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 01:41:15.987027 dockerd[2278]: time="2026-01-20T01:41:15.986552220Z" level=info msg="Starting up" Jan 20 01:41:16.389604 update_engine[1710]: I20260120 01:41:16.388813 1710 update_attempter.cc:509] Updating boot flags... Jan 20 01:41:16.403139 dockerd[2278]: time="2026-01-20T01:41:16.401826690Z" level=info msg="Loading containers: start." Jan 20 01:41:16.434834 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2315) Jan 20 01:41:16.541534 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2316) Jan 20 01:41:16.624803 kernel: Initializing XFRM netlink socket Jan 20 01:41:16.767877 systemd-networkd[1356]: docker0: Link UP Jan 20 01:41:16.789365 dockerd[2278]: time="2026-01-20T01:41:16.788801433Z" level=info msg="Loading containers: done." Jan 20 01:41:16.810898 dockerd[2278]: time="2026-01-20T01:41:16.810852759Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:41:16.811161 dockerd[2278]: time="2026-01-20T01:41:16.811143759Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 01:41:16.811322 dockerd[2278]: time="2026-01-20T01:41:16.811308639Z" level=info msg="Daemon has completed initialization" Jan 20 01:41:16.863072 dockerd[2278]: time="2026-01-20T01:41:16.863007733Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:41:16.863728 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 01:41:17.611529 containerd[1737]: time="2026-01-20T01:41:17.611483372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 01:41:18.491163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883516515.mount: Deactivated successfully. 
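Using only the two dockerd timestamps above ("Starting up" and "API listen on /run/docker.sock"), a quick Go sketch to compute how long daemon initialization took on this boot (about 0.88s); the timestamps are copied verbatim from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// "Starting up" and "API listen on /run/docker.sock" timestamps from the
	// dockerd entries above, in the RFC3339Nano form dockerd prints.
	start, err := time.Parse(time.RFC3339Nano, "2026-01-20T01:41:15.986552220Z")
	if err != nil {
		panic(err)
	}
	ready, err := time.Parse(time.RFC3339Nano, "2026-01-20T01:41:16.863007733Z")
	if err != nil {
		panic(err)
	}
	// Prints roughly 876ms: time from daemon start-up to the API socket
	// accepting connections on this boot.
	fmt.Println("dockerd ready after", ready.Sub(start).Round(time.Millisecond))
}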
Jan 20 01:41:19.970481 containerd[1737]: time="2026-01-20T01:41:19.970430915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:19.972745 containerd[1737]: time="2026-01-20T01:41:19.972511436Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Jan 20 01:41:19.975778 containerd[1737]: time="2026-01-20T01:41:19.975732197Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:19.979934 containerd[1737]: time="2026-01-20T01:41:19.979877159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:19.981265 containerd[1737]: time="2026-01-20T01:41:19.981114759Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.369593507s" Jan 20 01:41:19.981265 containerd[1737]: time="2026-01-20T01:41:19.981148719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 20 01:41:19.982676 containerd[1737]: time="2026-01-20T01:41:19.982646200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 01:41:21.470819 containerd[1737]: time="2026-01-20T01:41:21.470688351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:21.472661 containerd[1737]: time="2026-01-20T01:41:21.472632152Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Jan 20 01:41:21.476850 containerd[1737]: time="2026-01-20T01:41:21.476441873Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:21.480707 containerd[1737]: time="2026-01-20T01:41:21.480672515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:21.481855 containerd[1737]: time="2026-01-20T01:41:21.481823356Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.499059196s" Jan 20 01:41:21.481915 containerd[1737]: time="2026-01-20T01:41:21.481859156Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 20 01:41:21.482277 
containerd[1737]: time="2026-01-20T01:41:21.482255876Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 01:41:22.764816 containerd[1737]: time="2026-01-20T01:41:22.764610026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:22.766832 containerd[1737]: time="2026-01-20T01:41:22.766792866Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Jan 20 01:41:22.769508 containerd[1737]: time="2026-01-20T01:41:22.769444627Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:22.774278 containerd[1737]: time="2026-01-20T01:41:22.774249589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:22.775834 containerd[1737]: time="2026-01-20T01:41:22.775234790Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.292950874s" Jan 20 01:41:22.775834 containerd[1737]: time="2026-01-20T01:41:22.775266990Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 20 01:41:22.775834 containerd[1737]: time="2026-01-20T01:41:22.775648790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 01:41:23.825418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557694606.mount: Deactivated successfully. 
Jan 20 01:41:24.139475 containerd[1737]: time="2026-01-20T01:41:24.139434132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:24.141356 containerd[1737]: time="2026-01-20T01:41:24.141331173Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 20 01:41:24.143289 containerd[1737]: time="2026-01-20T01:41:24.143263654Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:24.146289 containerd[1737]: time="2026-01-20T01:41:24.146244335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:24.146922 containerd[1737]: time="2026-01-20T01:41:24.146775935Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.371102625s" Jan 20 01:41:24.146922 containerd[1737]: time="2026-01-20T01:41:24.146817695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 20 01:41:24.147368 containerd[1737]: time="2026-01-20T01:41:24.147344175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 01:41:24.728435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149915336.mount: Deactivated successfully. Jan 20 01:41:25.139628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 01:41:25.146745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:25.684177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:25.696146 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:25.728121 kubelet[2606]: E0120 01:41:25.728065 2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:25.731391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:25.731532 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
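The containerd entries above report both the bytes read and the pull duration for kube-proxy:v1.33.7, so the effective pull throughput can be estimated directly from the log; a small Go sketch using only those figures (roughly 20 MiB/s):

package main

import "fmt"

func main() {
	// Figures copied from the containerd entries for kube-proxy:v1.33.7:
	// "bytes read=28258673" and a reported pull time of 1.371102625s.
	const bytesRead = 28258673.0
	const seconds = 1.371102625
	mib := bytesRead / (1024 * 1024)
	fmt.Printf("pulled %.1f MiB in %.2fs => about %.1f MiB/s\n", mib, seconds, mib/seconds)
}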
Jan 20 01:41:26.500441 containerd[1737]: time="2026-01-20T01:41:26.499804230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:26.502393 containerd[1737]: time="2026-01-20T01:41:26.502169351Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jan 20 01:41:26.505378 containerd[1737]: time="2026-01-20T01:41:26.505336112Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:26.509692 containerd[1737]: time="2026-01-20T01:41:26.509647314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:26.510860 containerd[1737]: time="2026-01-20T01:41:26.510700835Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.36332474s" Jan 20 01:41:26.510860 containerd[1737]: time="2026-01-20T01:41:26.510733915Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 20 01:41:26.511983 containerd[1737]: time="2026-01-20T01:41:26.511860515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 01:41:27.018487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247889040.mount: Deactivated successfully. 
Jan 20 01:41:27.037302 containerd[1737]: time="2026-01-20T01:41:27.037250740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:27.039435 containerd[1737]: time="2026-01-20T01:41:27.039267981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 20 01:41:27.041320 containerd[1737]: time="2026-01-20T01:41:27.041298582Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:27.045791 containerd[1737]: time="2026-01-20T01:41:27.045018785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:27.045791 containerd[1737]: time="2026-01-20T01:41:27.045679105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 533.78951ms" Jan 20 01:41:27.045791 containerd[1737]: time="2026-01-20T01:41:27.045705745Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 20 01:41:27.046261 containerd[1737]: time="2026-01-20T01:41:27.046239106Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 01:41:27.670847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352647243.mount: Deactivated successfully. Jan 20 01:41:31.278865 containerd[1737]: time="2026-01-20T01:41:31.278817003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:31.280745 containerd[1737]: time="2026-01-20T01:41:31.280512084Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Jan 20 01:41:31.282869 containerd[1737]: time="2026-01-20T01:41:31.282819565Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:31.287629 containerd[1737]: time="2026-01-20T01:41:31.287585048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:31.288966 containerd[1737]: time="2026-01-20T01:41:31.288777929Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.242442983s" Jan 20 01:41:31.288966 containerd[1737]: time="2026-01-20T01:41:31.288821489Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 20 01:41:35.890368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Jan 20 01:41:35.908147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:36.256999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:36.261147 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:36.293890 kubelet[2707]: E0120 01:41:36.293851 2707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:36.296809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:36.296946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:37.013726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:37.019046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:37.046901 systemd[1]: Reloading requested from client PID 2721 ('systemctl') (unit session-9.scope)... Jan 20 01:41:37.046917 systemd[1]: Reloading... Jan 20 01:41:37.148049 zram_generator::config[2770]: No configuration found. Jan 20 01:41:37.244637 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:41:37.321457 systemd[1]: Reloading finished in 274 ms. Jan 20 01:41:37.371601 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:41:37.371675 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:41:37.372057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:37.373774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:37.541913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:37.546030 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:41:37.583078 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:41:37.583078 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:41:37.583078 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
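The kubelet start above warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flags that should move into the file named by --config. A hypothetical helper sketch in Go (the saved-journal workflow and file name are assumptions, not part of this system's tooling) that scans a journal dump and lists the flags to migrate:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Reads a saved journal on stdin (e.g. journalctl -u kubelet > kubelet.log)
// and prints each flag the kubelet reports as deprecated, so an operator
// knows what to move into the kubelet config file.
func main() {
	seen := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "has been deprecated") {
			continue
		}
		// Lines look like: kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, ...
		if i := strings.Index(line, "Flag --"); i >= 0 {
			flag := strings.Fields(line[i+len("Flag "):])[0]
			if !seen[flag] {
				seen[flag] = true
				fmt.Println(flag)
			}
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Usage would be something like: journalctl -u kubelet --no-pager | go run listflags.go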
Jan 20 01:41:37.583078 kubelet[2828]: I0120 01:41:37.581616 2828 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:41:38.703801 kubelet[2828]: I0120 01:41:38.703620 2828 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 01:41:38.703801 kubelet[2828]: I0120 01:41:38.703658 2828 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:41:38.704300 kubelet[2828]: I0120 01:41:38.704284 2828 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:41:38.722103 kubelet[2828]: E0120 01:41:38.721922 2828 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:41:38.722103 kubelet[2828]: I0120 01:41:38.721971 2828 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:41:38.729466 kubelet[2828]: E0120 01:41:38.729429 2828 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 01:41:38.730804 kubelet[2828]: I0120 01:41:38.729608 2828 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 01:41:38.735121 kubelet[2828]: I0120 01:41:38.735098 2828 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 01:41:38.736452 kubelet[2828]: I0120 01:41:38.736421 2828 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:41:38.736707 kubelet[2828]: I0120 01:41:38.736526 2828 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-e5d82fe73a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:41:38.736867 kubelet[2828]: I0120 01:41:38.736855 2828 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:41:38.736928 kubelet[2828]: I0120 01:41:38.736920 2828 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 01:41:38.737090 kubelet[2828]: I0120 01:41:38.737079 2828 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:41:38.740036 kubelet[2828]: I0120 01:41:38.740018 2828 kubelet.go:480] "Attempting to sync node with API server" Jan 20 01:41:38.740123 kubelet[2828]: I0120 01:41:38.740113 2828 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:41:38.741313 kubelet[2828]: I0120 01:41:38.741298 2828 kubelet.go:386] "Adding apiserver pod source" Jan 20 01:41:38.742542 kubelet[2828]: I0120 01:41:38.742527 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:41:38.744706 kubelet[2828]: E0120 01:41:38.744674 2828 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-e5d82fe73a&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:41:38.745061 kubelet[2828]: E0120 01:41:38.745030 2828 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 20 01:41:38.745137 kubelet[2828]: I0120 01:41:38.745117 2828 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 01:41:38.745680 kubelet[2828]: I0120 01:41:38.745655 2828 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:41:38.745732 kubelet[2828]: W0120 01:41:38.745715 2828 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:41:38.749081 kubelet[2828]: I0120 01:41:38.749055 2828 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:41:38.749147 kubelet[2828]: I0120 01:41:38.749096 2828 server.go:1289] "Started kubelet" Jan 20 01:41:38.750848 kubelet[2828]: I0120 01:41:38.750141 2828 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:41:38.750929 kubelet[2828]: I0120 01:41:38.750864 2828 server.go:317] "Adding debug handlers to kubelet server" Jan 20 01:41:38.751811 kubelet[2828]: I0120 01:41:38.751578 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:41:38.752024 kubelet[2828]: I0120 01:41:38.752009 2828 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:41:38.753801 kubelet[2828]: E0120 01:41:38.752177 2828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-e5d82fe73a.188c4ce4b09ca202 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-e5d82fe73a,UID:ci-4081.3.6-n-e5d82fe73a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-e5d82fe73a,},FirstTimestamp:2026-01-20 01:41:38.749071874 +0000 UTC m=+1.199992042,LastTimestamp:2026-01-20 01:41:38.749071874 +0000 UTC m=+1.199992042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-e5d82fe73a,}" Jan 20 01:41:38.756552 kubelet[2828]: I0120 01:41:38.756092 2828 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:41:38.756861 kubelet[2828]: I0120 01:41:38.755806 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:41:38.758101 kubelet[2828]: E0120 01:41:38.758078 2828 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:41:38.758336 kubelet[2828]: I0120 01:41:38.758273 2828 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:41:38.758400 kubelet[2828]: I0120 01:41:38.758379 2828 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:41:38.758433 kubelet[2828]: I0120 01:41:38.758430 2828 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:41:38.758917 kubelet[2828]: E0120 01:41:38.758796 2828 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:41:38.759098 kubelet[2828]: E0120 01:41:38.759076 2828 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" Jan 20 01:41:38.759167 kubelet[2828]: E0120 01:41:38.759146 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e5d82fe73a?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms" Jan 20 01:41:38.759298 kubelet[2828]: I0120 01:41:38.759280 2828 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:41:38.759373 kubelet[2828]: I0120 01:41:38.759356 2828 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:41:38.760057 kubelet[2828]: I0120 01:41:38.760038 2828 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:41:38.794811 kubelet[2828]: I0120 01:41:38.794760 2828 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 01:41:38.795755 kubelet[2828]: I0120 01:41:38.795725 2828 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 01:41:38.795755 kubelet[2828]: I0120 01:41:38.795748 2828 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 01:41:38.795859 kubelet[2828]: I0120 01:41:38.795767 2828 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 01:41:38.795859 kubelet[2828]: I0120 01:41:38.795774 2828 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 01:41:38.796081 kubelet[2828]: E0120 01:41:38.796056 2828 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:41:38.799105 kubelet[2828]: E0120 01:41:38.799061 2828 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:41:38.859401 kubelet[2828]: E0120 01:41:38.859367 2828 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" Jan 20 01:41:38.863890 kubelet[2828]: I0120 01:41:38.863857 2828 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:41:38.863890 kubelet[2828]: I0120 01:41:38.863877 2828 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:41:38.863890 kubelet[2828]: I0120 01:41:38.863894 2828 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:41:38.868433 kubelet[2828]: I0120 01:41:38.868407 2828 policy_none.go:49] "None policy: Start" Jan 20 01:41:38.868433 kubelet[2828]: I0120 01:41:38.868437 2828 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:41:38.868533 kubelet[2828]: I0120 01:41:38.868449 2828 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:41:38.896434 kubelet[2828]: E0120 01:41:38.896305 2828 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:41:38.899526 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:41:38.914372 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:41:38.917929 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:41:38.931740 kubelet[2828]: E0120 01:41:38.931534 2828 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:41:38.931740 kubelet[2828]: I0120 01:41:38.931741 2828 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:41:38.931900 kubelet[2828]: I0120 01:41:38.931752 2828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:41:38.932111 kubelet[2828]: I0120 01:41:38.932052 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:41:38.934143 kubelet[2828]: E0120 01:41:38.934118 2828 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:41:38.934242 kubelet[2828]: E0120 01:41:38.934157 2828 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-e5d82fe73a\" not found" Jan 20 01:41:38.960192 kubelet[2828]: E0120 01:41:38.960065 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e5d82fe73a?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms" Jan 20 01:41:39.033620 kubelet[2828]: I0120 01:41:39.033589 2828 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.033969 kubelet[2828]: E0120 01:41:39.033945 2828 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.110174 systemd[1]: Created slice kubepods-burstable-pod76e66c9e36d613150900d83541aea68d.slice - libcontainer container kubepods-burstable-pod76e66c9e36d613150900d83541aea68d.slice. Jan 20 01:41:39.116072 kubelet[2828]: E0120 01:41:39.116040 2828 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.121177 systemd[1]: Created slice kubepods-burstable-pod79f48118115c4b4c348b8aa07491139a.slice - libcontainer container kubepods-burstable-pod79f48118115c4b4c348b8aa07491139a.slice. Jan 20 01:41:39.122981 kubelet[2828]: E0120 01:41:39.122958 2828 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.125089 systemd[1]: Created slice kubepods-burstable-podb31d278ac66fac8a4082e57a53a97b4d.slice - libcontainer container kubepods-burstable-podb31d278ac66fac8a4082e57a53a97b4d.slice. 
Jan 20 01:41:39.126651 kubelet[2828]: E0120 01:41:39.126502 2828 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.160755 kubelet[2828]: I0120 01:41:39.160729 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.160940 kubelet[2828]: I0120 01:41:39.160764 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.160940 kubelet[2828]: I0120 01:41:39.160795 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76e66c9e36d613150900d83541aea68d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e5d82fe73a\" (UID: \"76e66c9e36d613150900d83541aea68d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.160940 kubelet[2828]: I0120 01:41:39.160812 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76e66c9e36d613150900d83541aea68d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-e5d82fe73a\" (UID: \"76e66c9e36d613150900d83541aea68d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.160940 kubelet[2828]: I0120 01:41:39.160828 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.160940 kubelet[2828]: I0120 01:41:39.160846 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.161075 kubelet[2828]: I0120 01:41:39.160860 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b31d278ac66fac8a4082e57a53a97b4d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-e5d82fe73a\" (UID: \"b31d278ac66fac8a4082e57a53a97b4d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.161075 kubelet[2828]: I0120 01:41:39.160874 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76e66c9e36d613150900d83541aea68d-ca-certs\") pod 
\"kube-apiserver-ci-4081.3.6-n-e5d82fe73a\" (UID: \"76e66c9e36d613150900d83541aea68d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.161075 kubelet[2828]: I0120 01:41:39.160893 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.236191 kubelet[2828]: I0120 01:41:39.236098 2828 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.236715 kubelet[2828]: E0120 01:41:39.236387 2828 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.361247 kubelet[2828]: E0120 01:41:39.361209 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e5d82fe73a?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms" Jan 20 01:41:39.417170 containerd[1737]: time="2026-01-20T01:41:39.417125680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-e5d82fe73a,Uid:76e66c9e36d613150900d83541aea68d,Namespace:kube-system,Attempt:0,}" Jan 20 01:41:39.423798 containerd[1737]: time="2026-01-20T01:41:39.423580042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-e5d82fe73a,Uid:79f48118115c4b4c348b8aa07491139a,Namespace:kube-system,Attempt:0,}" Jan 20 01:41:39.427482 containerd[1737]: time="2026-01-20T01:41:39.427456963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-e5d82fe73a,Uid:b31d278ac66fac8a4082e57a53a97b4d,Namespace:kube-system,Attempt:0,}" Jan 20 01:41:39.637996 kubelet[2828]: I0120 01:41:39.637902 2828 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.638277 kubelet[2828]: E0120 01:41:39.638251 2828 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:39.997932 kubelet[2828]: E0120 01:41:39.997889 2828 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-e5d82fe73a&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:41:40.011050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount483454292.mount: Deactivated successfully. 
Jan 20 01:41:40.029818 containerd[1737]: time="2026-01-20T01:41:40.029628425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:41:40.031534 containerd[1737]: time="2026-01-20T01:41:40.031499385Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 20 01:41:40.033683 containerd[1737]: time="2026-01-20T01:41:40.033645266Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:41:40.035813 containerd[1737]: time="2026-01-20T01:41:40.035738707Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:41:40.039274 containerd[1737]: time="2026-01-20T01:41:40.039239508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 01:41:40.041569 containerd[1737]: time="2026-01-20T01:41:40.041537029Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:41:40.045030 containerd[1737]: time="2026-01-20T01:41:40.044996830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 01:41:40.048472 containerd[1737]: time="2026-01-20T01:41:40.048429352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:41:40.049557 containerd[1737]: time="2026-01-20T01:41:40.049176992Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 631.977352ms" Jan 20 01:41:40.051004 containerd[1737]: time="2026-01-20T01:41:40.050974672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 627.33447ms" Jan 20 01:41:40.052507 containerd[1737]: time="2026-01-20T01:41:40.052477193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 624.96759ms" Jan 20 01:41:40.131939 kubelet[2828]: E0120 01:41:40.131903 2828 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:41:40.162007 kubelet[2828]: E0120 01:41:40.161971 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e5d82fe73a?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="1.6s" Jan 20 01:41:40.172543 kubelet[2828]: E0120 01:41:40.172504 2828 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:41:40.182101 kubelet[2828]: E0120 01:41:40.182060 2828 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:41:40.439801 kubelet[2828]: I0120 01:41:40.439690 2828 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:40.440029 kubelet[2828]: E0120 01:41:40.440004 2828 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:40.899882 kubelet[2828]: E0120 01:41:40.899825 2828 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:41:40.914644 containerd[1737]: time="2026-01-20T01:41:40.914567590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:41:40.914644 containerd[1737]: time="2026-01-20T01:41:40.914612750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:41:40.914644 containerd[1737]: time="2026-01-20T01:41:40.914626590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:40.915741 containerd[1737]: time="2026-01-20T01:41:40.915528470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:40.916589 containerd[1737]: time="2026-01-20T01:41:40.916242950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:41:40.916669 containerd[1737]: time="2026-01-20T01:41:40.916592591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:41:40.916740 containerd[1737]: time="2026-01-20T01:41:40.916713591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:40.917065 containerd[1737]: time="2026-01-20T01:41:40.917027751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:40.924841 containerd[1737]: time="2026-01-20T01:41:40.924628274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:41:40.925124 containerd[1737]: time="2026-01-20T01:41:40.925055234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:41:40.925124 containerd[1737]: time="2026-01-20T01:41:40.925081074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:40.925524 containerd[1737]: time="2026-01-20T01:41:40.925424034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:40.942086 systemd[1]: Started cri-containerd-95694d8c17b2f6df4da0a30b43f8b3ef492fe97b4daf718698c824b4045501fa.scope - libcontainer container 95694d8c17b2f6df4da0a30b43f8b3ef492fe97b4daf718698c824b4045501fa. Jan 20 01:41:40.947851 systemd[1]: Started cri-containerd-1ef6661ae16d32b3c0a1787c556bb69f163c393139009d7940e6de5784c7cd49.scope - libcontainer container 1ef6661ae16d32b3c0a1787c556bb69f163c393139009d7940e6de5784c7cd49. Jan 20 01:41:40.950062 systemd[1]: Started cri-containerd-496db12ce69ba6f95c01b85201181182d40735579124bd0e2e16ba73c2d59379.scope - libcontainer container 496db12ce69ba6f95c01b85201181182d40735579124bd0e2e16ba73c2d59379. Jan 20 01:41:40.993730 containerd[1737]: time="2026-01-20T01:41:40.993648099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-e5d82fe73a,Uid:b31d278ac66fac8a4082e57a53a97b4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"95694d8c17b2f6df4da0a30b43f8b3ef492fe97b4daf718698c824b4045501fa\"" Jan 20 01:41:41.012878 containerd[1737]: time="2026-01-20T01:41:41.012814146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-e5d82fe73a,Uid:76e66c9e36d613150900d83541aea68d,Namespace:kube-system,Attempt:0,} returns sandbox id \"496db12ce69ba6f95c01b85201181182d40735579124bd0e2e16ba73c2d59379\"" Jan 20 01:41:41.016218 containerd[1737]: time="2026-01-20T01:41:41.016173267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-e5d82fe73a,Uid:79f48118115c4b4c348b8aa07491139a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ef6661ae16d32b3c0a1787c556bb69f163c393139009d7940e6de5784c7cd49\"" Jan 20 01:41:41.067069 containerd[1737]: time="2026-01-20T01:41:41.066950046Z" level=info msg="CreateContainer within sandbox \"95694d8c17b2f6df4da0a30b43f8b3ef492fe97b4daf718698c824b4045501fa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:41:41.208435 containerd[1737]: time="2026-01-20T01:41:41.207727898Z" level=info msg="CreateContainer within sandbox \"496db12ce69ba6f95c01b85201181182d40735579124bd0e2e16ba73c2d59379\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:41:41.323483 containerd[1737]: time="2026-01-20T01:41:41.323442060Z" level=info msg="CreateContainer within sandbox \"1ef6661ae16d32b3c0a1787c556bb69f163c393139009d7940e6de5784c7cd49\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:41:41.372757 containerd[1737]: time="2026-01-20T01:41:41.372313918Z" level=info msg="CreateContainer within sandbox \"95694d8c17b2f6df4da0a30b43f8b3ef492fe97b4daf718698c824b4045501fa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a462da01c2f5fc600a68057ca94a2731c6f43999fc3cf7a0eeadb691809d81ce\"" Jan 20 01:41:41.373180 containerd[1737]: time="2026-01-20T01:41:41.373157198Z" level=info msg="StartContainer for \"a462da01c2f5fc600a68057ca94a2731c6f43999fc3cf7a0eeadb691809d81ce\"" Jan 20 01:41:41.376990 containerd[1737]: time="2026-01-20T01:41:41.376955080Z" level=info msg="CreateContainer within sandbox \"496db12ce69ba6f95c01b85201181182d40735579124bd0e2e16ba73c2d59379\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"628fa6b47b38638235eba87fcac1642b00e1ae17b0cf344490d752568cee29f8\"" Jan 20 01:41:41.377592 containerd[1737]: time="2026-01-20T01:41:41.377567600Z" level=info msg="StartContainer for \"628fa6b47b38638235eba87fcac1642b00e1ae17b0cf344490d752568cee29f8\"" Jan 20 01:41:41.380845 containerd[1737]: time="2026-01-20T01:41:41.380774641Z" level=info msg="CreateContainer within sandbox \"1ef6661ae16d32b3c0a1787c556bb69f163c393139009d7940e6de5784c7cd49\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8d947d13d29edcf460a098b876ee75e31e335be9f6dced209637156d78900bcb\"" Jan 20 01:41:41.382312 containerd[1737]: time="2026-01-20T01:41:41.381264881Z" level=info msg="StartContainer for \"8d947d13d29edcf460a098b876ee75e31e335be9f6dced209637156d78900bcb\"" Jan 20 01:41:41.412967 systemd[1]: Started cri-containerd-628fa6b47b38638235eba87fcac1642b00e1ae17b0cf344490d752568cee29f8.scope - libcontainer container 628fa6b47b38638235eba87fcac1642b00e1ae17b0cf344490d752568cee29f8. Jan 20 01:41:41.413847 systemd[1]: Started cri-containerd-8d947d13d29edcf460a098b876ee75e31e335be9f6dced209637156d78900bcb.scope - libcontainer container 8d947d13d29edcf460a098b876ee75e31e335be9f6dced209637156d78900bcb. Jan 20 01:41:41.414644 systemd[1]: Started cri-containerd-a462da01c2f5fc600a68057ca94a2731c6f43999fc3cf7a0eeadb691809d81ce.scope - libcontainer container a462da01c2f5fc600a68057ca94a2731c6f43999fc3cf7a0eeadb691809d81ce. 
Jan 20 01:41:41.460313 containerd[1737]: time="2026-01-20T01:41:41.460205990Z" level=info msg="StartContainer for \"a462da01c2f5fc600a68057ca94a2731c6f43999fc3cf7a0eeadb691809d81ce\" returns successfully" Jan 20 01:41:41.476533 containerd[1737]: time="2026-01-20T01:41:41.476492716Z" level=info msg="StartContainer for \"628fa6b47b38638235eba87fcac1642b00e1ae17b0cf344490d752568cee29f8\" returns successfully" Jan 20 01:41:41.476808 containerd[1737]: time="2026-01-20T01:41:41.476492676Z" level=info msg="StartContainer for \"8d947d13d29edcf460a098b876ee75e31e335be9f6dced209637156d78900bcb\" returns successfully" Jan 20 01:41:41.808858 kubelet[2828]: E0120 01:41:41.808601 2828 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:41.818856 kubelet[2828]: E0120 01:41:41.815924 2828 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:41.819533 kubelet[2828]: E0120 01:41:41.819501 2828 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:42.042300 kubelet[2828]: I0120 01:41:42.041992 2828 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:42.820301 kubelet[2828]: E0120 01:41:42.820117 2828 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:42.821932 kubelet[2828]: E0120 01:41:42.821788 2828 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:43.611643 kubelet[2828]: E0120 01:41:43.611595 2828 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-e5d82fe73a\" not found" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:43.717534 kubelet[2828]: I0120 01:41:43.717497 2828 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:43.747323 kubelet[2828]: I0120 01:41:43.747126 2828 apiserver.go:52] "Watching apiserver" Jan 20 01:41:43.759082 kubelet[2828]: I0120 01:41:43.759040 2828 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:41:43.760113 kubelet[2828]: I0120 01:41:43.760090 2828 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:43.774956 kubelet[2828]: E0120 01:41:43.774927 2828 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-e5d82fe73a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:43.774956 kubelet[2828]: I0120 01:41:43.774953 2828 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:43.776525 kubelet[2828]: E0120 01:41:43.776410 2828 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:43.776525 kubelet[2828]: I0120 01:41:43.776432 2828 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:43.779788 kubelet[2828]: E0120 01:41:43.777680 2828 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-e5d82fe73a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:45.606275 systemd[1]: Reloading requested from client PID 3113 ('systemctl') (unit session-9.scope)... Jan 20 01:41:45.606288 systemd[1]: Reloading... Jan 20 01:41:45.661923 zram_generator::config[3149]: No configuration found. Jan 20 01:41:45.822250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:41:45.913816 systemd[1]: Reloading finished in 307 ms. Jan 20 01:41:45.948238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:45.961810 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:41:45.962039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:45.962091 systemd[1]: kubelet.service: Consumed 1.535s CPU time, 129.3M memory peak, 0B memory swap peak. Jan 20 01:41:45.968007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:46.295495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:46.295838 (kubelet)[3217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:41:46.340586 kubelet[3217]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:41:46.340586 kubelet[3217]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:41:46.340586 kubelet[3217]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 01:41:46.340985 kubelet[3217]: I0120 01:41:46.340624 3217 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:41:46.347246 kubelet[3217]: I0120 01:41:46.347218 3217 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 01:41:46.348745 kubelet[3217]: I0120 01:41:46.347391 3217 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:41:46.348745 kubelet[3217]: I0120 01:41:46.347612 3217 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:41:46.348904 kubelet[3217]: I0120 01:41:46.348892 3217 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 01:41:46.352143 kubelet[3217]: I0120 01:41:46.352111 3217 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:41:46.358009 kubelet[3217]: E0120 01:41:46.357980 3217 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 01:41:46.358118 kubelet[3217]: I0120 01:41:46.358106 3217 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 01:41:46.362487 kubelet[3217]: I0120 01:41:46.362468 3217 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 01:41:46.362970 kubelet[3217]: I0120 01:41:46.362942 3217 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:41:46.363260 kubelet[3217]: I0120 01:41:46.363043 3217 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-e5d82fe73a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:41:46.363401 kubelet[3217]: I0120 01:41:46.363388 3217 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 20 01:41:46.363452 kubelet[3217]: I0120 01:41:46.363445 3217 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 01:41:46.363541 kubelet[3217]: I0120 01:41:46.363533 3217 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:41:46.363750 kubelet[3217]: I0120 01:41:46.363741 3217 kubelet.go:480] "Attempting to sync node with API server" Jan 20 01:41:46.364219 kubelet[3217]: I0120 01:41:46.364207 3217 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:41:46.364346 kubelet[3217]: I0120 01:41:46.364335 3217 kubelet.go:386] "Adding apiserver pod source" Jan 20 01:41:46.364405 kubelet[3217]: I0120 01:41:46.364397 3217 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:41:46.367737 kubelet[3217]: I0120 01:41:46.367718 3217 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 01:41:46.368385 kubelet[3217]: I0120 01:41:46.368366 3217 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:41:46.371678 kubelet[3217]: I0120 01:41:46.371659 3217 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:41:46.371750 kubelet[3217]: I0120 01:41:46.371698 3217 server.go:1289] "Started kubelet" Jan 20 01:41:46.371849 kubelet[3217]: I0120 01:41:46.371824 3217 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:41:46.371955 kubelet[3217]: I0120 01:41:46.371914 3217 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:41:46.372173 kubelet[3217]: I0120 01:41:46.372149 3217 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:41:46.374806 kubelet[3217]: I0120 01:41:46.373422 3217 server.go:317] "Adding debug handlers to kubelet server" Jan 20 01:41:46.379791 kubelet[3217]: I0120 01:41:46.378278 3217 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:41:46.389021 kubelet[3217]: I0120 01:41:46.388994 3217 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:41:46.390177 kubelet[3217]: I0120 01:41:46.390160 3217 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:41:46.390378 kubelet[3217]: E0120 01:41:46.390361 3217 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-e5d82fe73a\" not found" Jan 20 01:41:46.393665 kubelet[3217]: I0120 01:41:46.393643 3217 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:41:46.394169 kubelet[3217]: I0120 01:41:46.393779 3217 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:41:46.411749 kubelet[3217]: I0120 01:41:46.411719 3217 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:41:46.411972 kubelet[3217]: I0120 01:41:46.411930 3217 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 20 01:41:46.412075 kubelet[3217]: I0120 01:41:46.412058 3217 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:41:46.414647 kubelet[3217]: I0120 01:41:46.414620 3217 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 01:41:46.414647 kubelet[3217]: I0120 01:41:46.414643 3217 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 01:41:46.414756 kubelet[3217]: I0120 01:41:46.414663 3217 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:41:46.414756 kubelet[3217]: I0120 01:41:46.414669 3217 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 01:41:46.414756 kubelet[3217]: E0120 01:41:46.414703 3217 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:41:46.415076 kubelet[3217]: I0120 01:41:46.415057 3217 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:41:46.443568 kubelet[3217]: E0120 01:41:46.443535 3217 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:41:46.468367 kubelet[3217]: I0120 01:41:46.468341 3217 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:41:46.468367 kubelet[3217]: I0120 01:41:46.468357 3217 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:41:46.468367 kubelet[3217]: I0120 01:41:46.468378 3217 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:41:46.468534 kubelet[3217]: I0120 01:41:46.468503 3217 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:41:46.468534 kubelet[3217]: I0120 01:41:46.468515 3217 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:41:46.468534 kubelet[3217]: I0120 01:41:46.468531 3217 policy_none.go:49] "None policy: Start" Jan 20 01:41:46.468594 kubelet[3217]: I0120 01:41:46.468539 3217 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:41:46.468594 kubelet[3217]: I0120 01:41:46.468546 3217 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:41:46.468634 kubelet[3217]: I0120 01:41:46.468623 3217 state_mem.go:75] "Updated machine memory state" Jan 20 01:41:46.472879 kubelet[3217]: E0120 01:41:46.472062 3217 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:41:46.472879 kubelet[3217]: I0120 01:41:46.472211 3217 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:41:46.472879 kubelet[3217]: I0120 01:41:46.472221 3217 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:41:46.472879 kubelet[3217]: I0120 01:41:46.472647 3217 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:41:46.474619 kubelet[3217]: E0120 01:41:46.474601 3217 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:41:46.516259 kubelet[3217]: I0120 01:41:46.515903 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.516259 kubelet[3217]: I0120 01:41:46.515957 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.516259 kubelet[3217]: I0120 01:41:46.516210 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.529912 kubelet[3217]: I0120 01:41:46.529804 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:41:46.531481 kubelet[3217]: I0120 01:41:46.531165 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:41:46.531779 kubelet[3217]: I0120 01:41:46.531761 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:41:46.574353 kubelet[3217]: I0120 01:41:46.574267 3217 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.596274 kubelet[3217]: I0120 01:41:46.595991 3217 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.596274 kubelet[3217]: I0120 01:41:46.596071 3217 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696135 kubelet[3217]: I0120 01:41:46.696090 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b31d278ac66fac8a4082e57a53a97b4d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-e5d82fe73a\" (UID: \"b31d278ac66fac8a4082e57a53a97b4d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696135 kubelet[3217]: I0120 01:41:46.696131 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76e66c9e36d613150900d83541aea68d-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e5d82fe73a\" (UID: \"76e66c9e36d613150900d83541aea68d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696312 kubelet[3217]: I0120 01:41:46.696152 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76e66c9e36d613150900d83541aea68d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-e5d82fe73a\" (UID: \"76e66c9e36d613150900d83541aea68d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696312 kubelet[3217]: I0120 01:41:46.696170 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696312 kubelet[3217]: I0120 01:41:46.696188 3217 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696312 kubelet[3217]: I0120 01:41:46.696203 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76e66c9e36d613150900d83541aea68d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e5d82fe73a\" (UID: \"76e66c9e36d613150900d83541aea68d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696312 kubelet[3217]: I0120 01:41:46.696217 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696423 kubelet[3217]: I0120 01:41:46.696232 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:46.696423 kubelet[3217]: I0120 01:41:46.696247 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79f48118115c4b4c348b8aa07491139a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-e5d82fe73a\" (UID: \"79f48118115c4b4c348b8aa07491139a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:47.365103 kubelet[3217]: I0120 01:41:47.364845 3217 apiserver.go:52] "Watching apiserver" Jan 20 01:41:47.394070 kubelet[3217]: I0120 01:41:47.394001 3217 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:41:47.457489 kubelet[3217]: I0120 01:41:47.457453 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:47.457705 kubelet[3217]: I0120 01:41:47.457686 3217 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:47.472074 kubelet[3217]: I0120 01:41:47.472026 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:41:47.472222 kubelet[3217]: E0120 01:41:47.472094 3217 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-e5d82fe73a\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:47.475808 kubelet[3217]: I0120 01:41:47.473655 3217 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:41:47.475808 kubelet[3217]: E0120 01:41:47.473705 3217 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081.3.6-n-e5d82fe73a\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" Jan 20 01:41:47.508561 kubelet[3217]: I0120 01:41:47.508497 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e5d82fe73a" podStartSLOduration=1.508479575 podStartE2EDuration="1.508479575s" podCreationTimestamp="2026-01-20 01:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:41:47.487918733 +0000 UTC m=+1.185085494" watchObservedRunningTime="2026-01-20 01:41:47.508479575 +0000 UTC m=+1.205646376" Jan 20 01:41:47.520687 kubelet[3217]: I0120 01:41:47.520622 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e5d82fe73a" podStartSLOduration=1.5206051760000001 podStartE2EDuration="1.520605176s" podCreationTimestamp="2026-01-20 01:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:41:47.508804975 +0000 UTC m=+1.205971776" watchObservedRunningTime="2026-01-20 01:41:47.520605176 +0000 UTC m=+1.217771937" Jan 20 01:41:47.534544 kubelet[3217]: I0120 01:41:47.534494 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e5d82fe73a" podStartSLOduration=1.534476138 podStartE2EDuration="1.534476138s" podCreationTimestamp="2026-01-20 01:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:41:47.521289176 +0000 UTC m=+1.218455977" watchObservedRunningTime="2026-01-20 01:41:47.534476138 +0000 UTC m=+1.231642899" Jan 20 01:41:52.263499 kubelet[3217]: I0120 01:41:52.263455 3217 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:41:52.264976 containerd[1737]: time="2026-01-20T01:41:52.264211066Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 01:41:52.265247 kubelet[3217]: I0120 01:41:52.264761 3217 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:41:53.432334 kubelet[3217]: I0120 01:41:53.432300 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f96aa02-c852-4ae7-89a6-2b2132b0d8f3-xtables-lock\") pod \"kube-proxy-8q8f6\" (UID: \"9f96aa02-c852-4ae7-89a6-2b2132b0d8f3\") " pod="kube-system/kube-proxy-8q8f6" Jan 20 01:41:53.432334 kubelet[3217]: I0120 01:41:53.432334 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f96aa02-c852-4ae7-89a6-2b2132b0d8f3-lib-modules\") pod \"kube-proxy-8q8f6\" (UID: \"9f96aa02-c852-4ae7-89a6-2b2132b0d8f3\") " pod="kube-system/kube-proxy-8q8f6" Jan 20 01:41:53.433660 kubelet[3217]: I0120 01:41:53.432356 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfm7\" (UniqueName: \"kubernetes.io/projected/9f96aa02-c852-4ae7-89a6-2b2132b0d8f3-kube-api-access-5mfm7\") pod \"kube-proxy-8q8f6\" (UID: \"9f96aa02-c852-4ae7-89a6-2b2132b0d8f3\") " pod="kube-system/kube-proxy-8q8f6" Jan 20 01:41:53.433660 kubelet[3217]: I0120 01:41:53.432378 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f96aa02-c852-4ae7-89a6-2b2132b0d8f3-kube-proxy\") pod \"kube-proxy-8q8f6\" (UID: \"9f96aa02-c852-4ae7-89a6-2b2132b0d8f3\") " pod="kube-system/kube-proxy-8q8f6" Jan 20 01:41:53.432909 systemd[1]: Created slice kubepods-besteffort-pod9f96aa02_c852_4ae7_89a6_2b2132b0d8f3.slice - libcontainer container kubepods-besteffort-pod9f96aa02_c852_4ae7_89a6_2b2132b0d8f3.slice. Jan 20 01:41:53.510580 systemd[1]: Created slice kubepods-besteffort-pod32785a1c_45bc_42fb_bc88_651df8209e32.slice - libcontainer container kubepods-besteffort-pod32785a1c_45bc_42fb_bc88_651df8209e32.slice. Jan 20 01:41:53.533379 kubelet[3217]: I0120 01:41:53.533110 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32785a1c-45bc-42fb-bc88-651df8209e32-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kw8js\" (UID: \"32785a1c-45bc-42fb-bc88-651df8209e32\") " pod="tigera-operator/tigera-operator-7dcd859c48-kw8js" Jan 20 01:41:53.533379 kubelet[3217]: I0120 01:41:53.533167 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46lll\" (UniqueName: \"kubernetes.io/projected/32785a1c-45bc-42fb-bc88-651df8209e32-kube-api-access-46lll\") pod \"tigera-operator-7dcd859c48-kw8js\" (UID: \"32785a1c-45bc-42fb-bc88-651df8209e32\") " pod="tigera-operator/tigera-operator-7dcd859c48-kw8js" Jan 20 01:41:53.740434 containerd[1737]: time="2026-01-20T01:41:53.740010374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8q8f6,Uid:9f96aa02-c852-4ae7-89a6-2b2132b0d8f3,Namespace:kube-system,Attempt:0,}" Jan 20 01:41:53.771367 containerd[1737]: time="2026-01-20T01:41:53.771265109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:41:53.771654 containerd[1737]: time="2026-01-20T01:41:53.771399030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:41:53.771654 containerd[1737]: time="2026-01-20T01:41:53.771432910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:53.771921 containerd[1737]: time="2026-01-20T01:41:53.771882310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:53.795994 systemd[1]: Started cri-containerd-03d3d887626b1287f9a6e032da8c23ca7dea1b68ec954d13817e22a78d234e52.scope - libcontainer container 03d3d887626b1287f9a6e032da8c23ca7dea1b68ec954d13817e22a78d234e52. Jan 20 01:41:53.815162 containerd[1737]: time="2026-01-20T01:41:53.814953732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kw8js,Uid:32785a1c-45bc-42fb-bc88-651df8209e32,Namespace:tigera-operator,Attempt:0,}" Jan 20 01:41:53.816748 containerd[1737]: time="2026-01-20T01:41:53.816352492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8q8f6,Uid:9f96aa02-c852-4ae7-89a6-2b2132b0d8f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"03d3d887626b1287f9a6e032da8c23ca7dea1b68ec954d13817e22a78d234e52\"" Jan 20 01:41:53.826203 containerd[1737]: time="2026-01-20T01:41:53.826164657Z" level=info msg="CreateContainer within sandbox \"03d3d887626b1287f9a6e032da8c23ca7dea1b68ec954d13817e22a78d234e52\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:41:53.861608 containerd[1737]: time="2026-01-20T01:41:53.861330715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:41:53.861608 containerd[1737]: time="2026-01-20T01:41:53.861469635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:41:53.861608 containerd[1737]: time="2026-01-20T01:41:53.861481675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:53.861966 containerd[1737]: time="2026-01-20T01:41:53.861806675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:41:53.877959 containerd[1737]: time="2026-01-20T01:41:53.877916963Z" level=info msg="CreateContainer within sandbox \"03d3d887626b1287f9a6e032da8c23ca7dea1b68ec954d13817e22a78d234e52\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95ba34f924e73b36cbb5855f2b9b0f9f94b94aa1f63c87416d88c1e9b46ae1b0\"" Jan 20 01:41:53.879588 containerd[1737]: time="2026-01-20T01:41:53.878440804Z" level=info msg="StartContainer for \"95ba34f924e73b36cbb5855f2b9b0f9f94b94aa1f63c87416d88c1e9b46ae1b0\"" Jan 20 01:41:53.879314 systemd[1]: Started cri-containerd-dba51662ef858094e4fcbefb3b8f3479ec59a2ff22cb8ad591286a2f464a7a2a.scope - libcontainer container dba51662ef858094e4fcbefb3b8f3479ec59a2ff22cb8ad591286a2f464a7a2a. Jan 20 01:41:53.909981 systemd[1]: Started cri-containerd-95ba34f924e73b36cbb5855f2b9b0f9f94b94aa1f63c87416d88c1e9b46ae1b0.scope - libcontainer container 95ba34f924e73b36cbb5855f2b9b0f9f94b94aa1f63c87416d88c1e9b46ae1b0. 
Jan 20 01:41:53.922635 containerd[1737]: time="2026-01-20T01:41:53.922347946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kw8js,Uid:32785a1c-45bc-42fb-bc88-651df8209e32,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dba51662ef858094e4fcbefb3b8f3479ec59a2ff22cb8ad591286a2f464a7a2a\"" Jan 20 01:41:53.928025 containerd[1737]: time="2026-01-20T01:41:53.927957869Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 01:41:53.952184 containerd[1737]: time="2026-01-20T01:41:53.951966881Z" level=info msg="StartContainer for \"95ba34f924e73b36cbb5855f2b9b0f9f94b94aa1f63c87416d88c1e9b46ae1b0\" returns successfully" Jan 20 01:41:54.479949 kubelet[3217]: I0120 01:41:54.479384 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8q8f6" podStartSLOduration=1.479368988 podStartE2EDuration="1.479368988s" podCreationTimestamp="2026-01-20 01:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:41:54.479134348 +0000 UTC m=+8.176301149" watchObservedRunningTime="2026-01-20 01:41:54.479368988 +0000 UTC m=+8.176535789" Jan 20 01:41:56.161291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960117699.mount: Deactivated successfully. Jan 20 01:41:56.788648 containerd[1737]: time="2026-01-20T01:41:56.788602318Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:56.791467 containerd[1737]: time="2026-01-20T01:41:56.791439119Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 20 01:41:56.795539 containerd[1737]: time="2026-01-20T01:41:56.795509481Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:56.800072 containerd[1737]: time="2026-01-20T01:41:56.800036083Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:41:56.801273 containerd[1737]: time="2026-01-20T01:41:56.800904564Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.872909575s" Jan 20 01:41:56.801273 containerd[1737]: time="2026-01-20T01:41:56.800940684Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 20 01:41:56.807695 containerd[1737]: time="2026-01-20T01:41:56.807658167Z" level=info msg="CreateContainer within sandbox \"dba51662ef858094e4fcbefb3b8f3479ec59a2ff22cb8ad591286a2f464a7a2a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 01:41:56.823893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68358669.mount: Deactivated successfully. 
Jan 20 01:41:56.828400 containerd[1737]: time="2026-01-20T01:41:56.828325658Z" level=info msg="CreateContainer within sandbox \"dba51662ef858094e4fcbefb3b8f3479ec59a2ff22cb8ad591286a2f464a7a2a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"10f8064d6832975294159aebc1832d89200decce275354317911e556bb05a117\"" Jan 20 01:41:56.831005 containerd[1737]: time="2026-01-20T01:41:56.829522458Z" level=info msg="StartContainer for \"10f8064d6832975294159aebc1832d89200decce275354317911e556bb05a117\"" Jan 20 01:41:56.856939 systemd[1]: Started cri-containerd-10f8064d6832975294159aebc1832d89200decce275354317911e556bb05a117.scope - libcontainer container 10f8064d6832975294159aebc1832d89200decce275354317911e556bb05a117. Jan 20 01:41:56.881698 containerd[1737]: time="2026-01-20T01:41:56.881658365Z" level=info msg="StartContainer for \"10f8064d6832975294159aebc1832d89200decce275354317911e556bb05a117\" returns successfully" Jan 20 01:42:00.024252 kubelet[3217]: I0120 01:42:00.024197 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kw8js" podStartSLOduration=4.14948359 podStartE2EDuration="7.024183166s" podCreationTimestamp="2026-01-20 01:41:53 +0000 UTC" firstStartedPulling="2026-01-20 01:41:53.927014228 +0000 UTC m=+7.624181029" lastFinishedPulling="2026-01-20 01:41:56.801713804 +0000 UTC m=+10.498880605" observedRunningTime="2026-01-20 01:41:57.486679284 +0000 UTC m=+11.183846085" watchObservedRunningTime="2026-01-20 01:42:00.024183166 +0000 UTC m=+13.721349967" Jan 20 01:42:02.703978 sudo[2248]: pam_unix(sudo:session): session closed for user root Jan 20 01:42:02.781150 sshd[2245]: pam_unix(sshd:session): session closed for user core Jan 20 01:42:02.787664 systemd[1]: sshd@6-10.200.20.17:22-10.200.16.10:34540.service: Deactivated successfully. Jan 20 01:42:02.794111 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:42:02.794701 systemd[1]: session-9.scope: Consumed 6.893s CPU time, 152.7M memory peak, 0B memory swap peak. Jan 20 01:42:02.797816 systemd-logind[1707]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:42:02.798997 systemd-logind[1707]: Removed session 9. Jan 20 01:42:12.845250 systemd[1]: Created slice kubepods-besteffort-podf950ebe4_d27e_4c56_ac80_4a47502671b5.slice - libcontainer container kubepods-besteffort-podf950ebe4_d27e_4c56_ac80_4a47502671b5.slice. 
Jan 20 01:42:12.943270 kubelet[3217]: I0120 01:42:12.943225 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f950ebe4-d27e-4c56-ac80-4a47502671b5-typha-certs\") pod \"calico-typha-5f64486dc8-9zqw7\" (UID: \"f950ebe4-d27e-4c56-ac80-4a47502671b5\") " pod="calico-system/calico-typha-5f64486dc8-9zqw7" Jan 20 01:42:12.943270 kubelet[3217]: I0120 01:42:12.943272 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9pr\" (UniqueName: \"kubernetes.io/projected/f950ebe4-d27e-4c56-ac80-4a47502671b5-kube-api-access-rh9pr\") pod \"calico-typha-5f64486dc8-9zqw7\" (UID: \"f950ebe4-d27e-4c56-ac80-4a47502671b5\") " pod="calico-system/calico-typha-5f64486dc8-9zqw7" Jan 20 01:42:12.943660 kubelet[3217]: I0120 01:42:12.943301 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f950ebe4-d27e-4c56-ac80-4a47502671b5-tigera-ca-bundle\") pod \"calico-typha-5f64486dc8-9zqw7\" (UID: \"f950ebe4-d27e-4c56-ac80-4a47502671b5\") " pod="calico-system/calico-typha-5f64486dc8-9zqw7" Jan 20 01:42:13.070142 systemd[1]: Created slice kubepods-besteffort-pod62ce4b8e_7632_4ac8_a62b_7cb6982b4e5d.slice - libcontainer container kubepods-besteffort-pod62ce4b8e_7632_4ac8_a62b_7cb6982b4e5d.slice. Jan 20 01:42:13.144948 kubelet[3217]: I0120 01:42:13.144915 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-flexvol-driver-host\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145069 kubelet[3217]: I0120 01:42:13.144956 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-lib-modules\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145069 kubelet[3217]: I0120 01:42:13.144974 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-policysync\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145069 kubelet[3217]: I0120 01:42:13.144990 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-cni-bin-dir\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145069 kubelet[3217]: I0120 01:42:13.145031 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-node-certs\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145069 kubelet[3217]: I0120 01:42:13.145046 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-var-lib-calico\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145187 kubelet[3217]: I0120 01:42:13.145059 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-var-run-calico\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145187 kubelet[3217]: I0120 01:42:13.145074 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-tigera-ca-bundle\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145187 kubelet[3217]: I0120 01:42:13.145089 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-cni-net-dir\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145187 kubelet[3217]: I0120 01:42:13.145105 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vwhc\" (UniqueName: \"kubernetes.io/projected/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-kube-api-access-6vwhc\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145187 kubelet[3217]: I0120 01:42:13.145119 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-cni-log-dir\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.145291 kubelet[3217]: I0120 01:42:13.145133 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d-xtables-lock\") pod \"calico-node-wvjz7\" (UID: \"62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d\") " pod="calico-system/calico-node-wvjz7" Jan 20 01:42:13.148841 containerd[1737]: time="2026-01-20T01:42:13.148790386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f64486dc8-9zqw7,Uid:f950ebe4-d27e-4c56-ac80-4a47502671b5,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:13.183434 containerd[1737]: time="2026-01-20T01:42:13.183332197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:13.183434 containerd[1737]: time="2026-01-20T01:42:13.183386557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:13.183434 containerd[1737]: time="2026-01-20T01:42:13.183414077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:13.184251 containerd[1737]: time="2026-01-20T01:42:13.184193238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:13.206017 systemd[1]: Started cri-containerd-15cc09e45dcb8296aae57327d57e6aafec74166e37f95bc07d14f409f112975d.scope - libcontainer container 15cc09e45dcb8296aae57327d57e6aafec74166e37f95bc07d14f409f112975d. Jan 20 01:42:13.246716 containerd[1737]: time="2026-01-20T01:42:13.246554378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f64486dc8-9zqw7,Uid:f950ebe4-d27e-4c56-ac80-4a47502671b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"15cc09e45dcb8296aae57327d57e6aafec74166e37f95bc07d14f409f112975d\"" Jan 20 01:42:13.249220 kubelet[3217]: E0120 01:42:13.249011 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.249220 kubelet[3217]: W0120 01:42:13.249036 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.249220 kubelet[3217]: E0120 01:42:13.249058 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.249220 kubelet[3217]: E0120 01:42:13.249206 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.249220 kubelet[3217]: W0120 01:42:13.249212 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.249220 kubelet[3217]: E0120 01:42:13.249220 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.249431 kubelet[3217]: E0120 01:42:13.249336 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.249431 kubelet[3217]: W0120 01:42:13.249342 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.249431 kubelet[3217]: E0120 01:42:13.249351 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.250398 kubelet[3217]: E0120 01:42:13.249525 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.250398 kubelet[3217]: W0120 01:42:13.249539 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.250398 kubelet[3217]: E0120 01:42:13.249548 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.250398 kubelet[3217]: E0120 01:42:13.249686 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.250398 kubelet[3217]: W0120 01:42:13.249694 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.250398 kubelet[3217]: E0120 01:42:13.249701 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.250398 kubelet[3217]: E0120 01:42:13.249861 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.250398 kubelet[3217]: W0120 01:42:13.249870 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.250398 kubelet[3217]: E0120 01:42:13.249886 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.250398 kubelet[3217]: E0120 01:42:13.250149 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.251173 kubelet[3217]: W0120 01:42:13.250160 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.251173 kubelet[3217]: E0120 01:42:13.250170 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.251173 kubelet[3217]: E0120 01:42:13.250517 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.251173 kubelet[3217]: W0120 01:42:13.250527 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.251173 kubelet[3217]: E0120 01:42:13.250538 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.252831 kubelet[3217]: E0120 01:42:13.251300 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.252831 kubelet[3217]: W0120 01:42:13.251322 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.252831 kubelet[3217]: E0120 01:42:13.251335 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.252831 kubelet[3217]: E0120 01:42:13.251495 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.252831 kubelet[3217]: W0120 01:42:13.251502 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.252831 kubelet[3217]: E0120 01:42:13.251512 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.252831 kubelet[3217]: E0120 01:42:13.251621 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.252831 kubelet[3217]: W0120 01:42:13.251627 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.252831 kubelet[3217]: E0120 01:42:13.251633 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.252831 kubelet[3217]: E0120 01:42:13.251990 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.253078 kubelet[3217]: W0120 01:42:13.252000 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.253078 kubelet[3217]: E0120 01:42:13.252011 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.253078 kubelet[3217]: E0120 01:42:13.252476 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.253078 kubelet[3217]: W0120 01:42:13.252492 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.253078 kubelet[3217]: E0120 01:42:13.252506 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.253078 kubelet[3217]: E0120 01:42:13.253065 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.253078 kubelet[3217]: W0120 01:42:13.253077 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.253211 kubelet[3217]: E0120 01:42:13.253089 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.253514 kubelet[3217]: E0120 01:42:13.253416 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.253514 kubelet[3217]: W0120 01:42:13.253431 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.253514 kubelet[3217]: E0120 01:42:13.253442 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.255065 kubelet[3217]: E0120 01:42:13.254618 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.255065 kubelet[3217]: W0120 01:42:13.254634 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.255065 kubelet[3217]: E0120 01:42:13.254647 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.255149 containerd[1737]: time="2026-01-20T01:42:13.253754901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 01:42:13.255198 kubelet[3217]: E0120 01:42:13.255083 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.255198 kubelet[3217]: W0120 01:42:13.255094 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.255198 kubelet[3217]: E0120 01:42:13.255105 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.256114 kubelet[3217]: E0120 01:42:13.255612 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.256114 kubelet[3217]: W0120 01:42:13.255631 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.256114 kubelet[3217]: E0120 01:42:13.255643 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.256114 kubelet[3217]: E0120 01:42:13.256094 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.256114 kubelet[3217]: W0120 01:42:13.256105 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.256114 kubelet[3217]: E0120 01:42:13.256116 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.257169 kubelet[3217]: E0120 01:42:13.256886 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.257169 kubelet[3217]: W0120 01:42:13.257089 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.257169 kubelet[3217]: E0120 01:42:13.257105 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.261490 kubelet[3217]: E0120 01:42:13.260909 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.261490 kubelet[3217]: W0120 01:42:13.260926 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.261490 kubelet[3217]: E0120 01:42:13.260938 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.265953 kubelet[3217]: E0120 01:42:13.265854 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.265953 kubelet[3217]: W0120 01:42:13.265871 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.265953 kubelet[3217]: E0120 01:42:13.265889 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.273350 kubelet[3217]: E0120 01:42:13.272633 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:13.278645 kubelet[3217]: E0120 01:42:13.278624 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.279078 kubelet[3217]: W0120 01:42:13.278849 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.279078 kubelet[3217]: E0120 01:42:13.278874 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.335486 kubelet[3217]: E0120 01:42:13.335454 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.335486 kubelet[3217]: W0120 01:42:13.335477 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.335762 kubelet[3217]: E0120 01:42:13.335497 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.335864 kubelet[3217]: E0120 01:42:13.335850 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.335913 kubelet[3217]: W0120 01:42:13.335864 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.336008 kubelet[3217]: E0120 01:42:13.335911 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337279 kubelet[3217]: E0120 01:42:13.336111 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337279 kubelet[3217]: W0120 01:42:13.336122 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337279 kubelet[3217]: E0120 01:42:13.336132 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337279 kubelet[3217]: E0120 01:42:13.336284 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337279 kubelet[3217]: W0120 01:42:13.336291 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337279 kubelet[3217]: E0120 01:42:13.336299 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337279 kubelet[3217]: E0120 01:42:13.336433 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337279 kubelet[3217]: W0120 01:42:13.336440 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337279 kubelet[3217]: E0120 01:42:13.336450 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.337279 kubelet[3217]: E0120 01:42:13.336561 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337534 kubelet[3217]: W0120 01:42:13.336567 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337534 kubelet[3217]: E0120 01:42:13.336574 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337534 kubelet[3217]: E0120 01:42:13.336678 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337534 kubelet[3217]: W0120 01:42:13.336684 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337534 kubelet[3217]: E0120 01:42:13.336690 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337534 kubelet[3217]: E0120 01:42:13.336826 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337534 kubelet[3217]: W0120 01:42:13.336833 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337534 kubelet[3217]: E0120 01:42:13.336840 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337534 kubelet[3217]: E0120 01:42:13.336972 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337534 kubelet[3217]: W0120 01:42:13.336978 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337730 kubelet[3217]: E0120 01:42:13.336986 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337730 kubelet[3217]: E0120 01:42:13.337094 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337730 kubelet[3217]: W0120 01:42:13.337100 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337730 kubelet[3217]: E0120 01:42:13.337106 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.337730 kubelet[3217]: E0120 01:42:13.337233 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337730 kubelet[3217]: W0120 01:42:13.337239 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337730 kubelet[3217]: E0120 01:42:13.337249 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337730 kubelet[3217]: E0120 01:42:13.337388 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337730 kubelet[3217]: W0120 01:42:13.337396 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337730 kubelet[3217]: E0120 01:42:13.337404 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337946 kubelet[3217]: E0120 01:42:13.337530 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337946 kubelet[3217]: W0120 01:42:13.337536 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337946 kubelet[3217]: E0120 01:42:13.337543 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337946 kubelet[3217]: E0120 01:42:13.337646 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337946 kubelet[3217]: W0120 01:42:13.337651 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337946 kubelet[3217]: E0120 01:42:13.337657 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.337946 kubelet[3217]: E0120 01:42:13.337757 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.337946 kubelet[3217]: W0120 01:42:13.337763 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.337946 kubelet[3217]: E0120 01:42:13.337769 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.337946 kubelet[3217]: E0120 01:42:13.337898 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.338141 kubelet[3217]: W0120 01:42:13.337907 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.338141 kubelet[3217]: E0120 01:42:13.337915 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.338141 kubelet[3217]: E0120 01:42:13.338040 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.338141 kubelet[3217]: W0120 01:42:13.338047 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.338141 kubelet[3217]: E0120 01:42:13.338055 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.338238 kubelet[3217]: E0120 01:42:13.338154 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.338238 kubelet[3217]: W0120 01:42:13.338160 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.338238 kubelet[3217]: E0120 01:42:13.338166 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.338304 kubelet[3217]: E0120 01:42:13.338261 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.338304 kubelet[3217]: W0120 01:42:13.338267 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.338304 kubelet[3217]: E0120 01:42:13.338272 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.338460 kubelet[3217]: E0120 01:42:13.338368 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.338460 kubelet[3217]: W0120 01:42:13.338379 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.338460 kubelet[3217]: E0120 01:42:13.338386 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.346621 kubelet[3217]: E0120 01:42:13.346599 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.346621 kubelet[3217]: W0120 01:42:13.346649 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.346621 kubelet[3217]: E0120 01:42:13.346665 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.347379 kubelet[3217]: I0120 01:42:13.347260 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/44bdd32b-1d8e-4e5b-bb73-1e59535dcb96-socket-dir\") pod \"csi-node-driver-4gxvd\" (UID: \"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96\") " pod="calico-system/csi-node-driver-4gxvd" Jan 20 01:42:13.347847 kubelet[3217]: E0120 01:42:13.347686 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.347847 kubelet[3217]: W0120 01:42:13.347700 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.347847 kubelet[3217]: E0120 01:42:13.347714 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.347847 kubelet[3217]: I0120 01:42:13.347736 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/44bdd32b-1d8e-4e5b-bb73-1e59535dcb96-registration-dir\") pod \"csi-node-driver-4gxvd\" (UID: \"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96\") " pod="calico-system/csi-node-driver-4gxvd" Jan 20 01:42:13.348107 kubelet[3217]: E0120 01:42:13.348093 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.348492 kubelet[3217]: W0120 01:42:13.348128 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.348492 kubelet[3217]: E0120 01:42:13.348140 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.348492 kubelet[3217]: I0120 01:42:13.348161 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/44bdd32b-1d8e-4e5b-bb73-1e59535dcb96-varrun\") pod \"csi-node-driver-4gxvd\" (UID: \"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96\") " pod="calico-system/csi-node-driver-4gxvd" Jan 20 01:42:13.348492 kubelet[3217]: E0120 01:42:13.348439 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.348492 kubelet[3217]: W0120 01:42:13.348452 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.348492 kubelet[3217]: E0120 01:42:13.348465 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.349812 kubelet[3217]: E0120 01:42:13.349765 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.349812 kubelet[3217]: W0120 01:42:13.349798 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.349812 kubelet[3217]: E0120 01:42:13.349812 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.350047 kubelet[3217]: E0120 01:42:13.350035 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.350047 kubelet[3217]: W0120 01:42:13.350046 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.350164 kubelet[3217]: E0120 01:42:13.350055 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.350484 kubelet[3217]: E0120 01:42:13.350470 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.350484 kubelet[3217]: W0120 01:42:13.350482 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.350595 kubelet[3217]: E0120 01:42:13.350493 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.351352 kubelet[3217]: E0120 01:42:13.351333 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.351352 kubelet[3217]: W0120 01:42:13.351346 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.351516 kubelet[3217]: E0120 01:42:13.351359 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.351814 kubelet[3217]: I0120 01:42:13.351437 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qx28\" (UniqueName: \"kubernetes.io/projected/44bdd32b-1d8e-4e5b-bb73-1e59535dcb96-kube-api-access-6qx28\") pod \"csi-node-driver-4gxvd\" (UID: \"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96\") " pod="calico-system/csi-node-driver-4gxvd" Jan 20 01:42:13.351887 kubelet[3217]: E0120 01:42:13.351845 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.351887 kubelet[3217]: W0120 01:42:13.351855 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.351887 kubelet[3217]: E0120 01:42:13.351867 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.352580 kubelet[3217]: E0120 01:42:13.352555 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.352580 kubelet[3217]: W0120 01:42:13.352576 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.352827 kubelet[3217]: E0120 01:42:13.352594 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.352971 kubelet[3217]: E0120 01:42:13.352955 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.353360 kubelet[3217]: W0120 01:42:13.352969 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.353396 kubelet[3217]: E0120 01:42:13.353367 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.353420 kubelet[3217]: I0120 01:42:13.353392 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44bdd32b-1d8e-4e5b-bb73-1e59535dcb96-kubelet-dir\") pod \"csi-node-driver-4gxvd\" (UID: \"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96\") " pod="calico-system/csi-node-driver-4gxvd" Jan 20 01:42:13.353849 kubelet[3217]: E0120 01:42:13.353830 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.353849 kubelet[3217]: W0120 01:42:13.353845 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.354028 kubelet[3217]: E0120 01:42:13.353857 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.354097 kubelet[3217]: E0120 01:42:13.354085 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.354097 kubelet[3217]: W0120 01:42:13.354095 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.354150 kubelet[3217]: E0120 01:42:13.354103 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.354309 kubelet[3217]: E0120 01:42:13.354296 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.354309 kubelet[3217]: W0120 01:42:13.354307 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.354395 kubelet[3217]: E0120 01:42:13.354318 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.354492 kubelet[3217]: E0120 01:42:13.354480 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.354492 kubelet[3217]: W0120 01:42:13.354491 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.354539 kubelet[3217]: E0120 01:42:13.354500 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.380455 containerd[1737]: time="2026-01-20T01:42:13.380416543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wvjz7,Uid:62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:13.421735 containerd[1737]: time="2026-01-20T01:42:13.419859156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:13.422510 containerd[1737]: time="2026-01-20T01:42:13.422234076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:13.422815 containerd[1737]: time="2026-01-20T01:42:13.422682997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:13.423477 containerd[1737]: time="2026-01-20T01:42:13.423230077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:13.440113 systemd[1]: Started cri-containerd-14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2.scope - libcontainer container 14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2. Jan 20 01:42:13.455381 kubelet[3217]: E0120 01:42:13.455247 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.455381 kubelet[3217]: W0120 01:42:13.455268 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.455381 kubelet[3217]: E0120 01:42:13.455288 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.455806 kubelet[3217]: E0120 01:42:13.455675 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.455806 kubelet[3217]: W0120 01:42:13.455704 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.455806 kubelet[3217]: E0120 01:42:13.455718 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.455953 kubelet[3217]: E0120 01:42:13.455939 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.455953 kubelet[3217]: W0120 01:42:13.455951 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.456406 kubelet[3217]: E0120 01:42:13.455963 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.456600 kubelet[3217]: E0120 01:42:13.456501 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.456600 kubelet[3217]: W0120 01:42:13.456519 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.456600 kubelet[3217]: E0120 01:42:13.456540 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.457040 kubelet[3217]: E0120 01:42:13.456953 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.457040 kubelet[3217]: W0120 01:42:13.456973 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.457040 kubelet[3217]: E0120 01:42:13.456987 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.457536 kubelet[3217]: E0120 01:42:13.457451 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.457536 kubelet[3217]: W0120 01:42:13.457467 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.457536 kubelet[3217]: E0120 01:42:13.457484 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.458284 kubelet[3217]: E0120 01:42:13.458179 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.458284 kubelet[3217]: W0120 01:42:13.458196 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.458284 kubelet[3217]: E0120 01:42:13.458210 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.459114 kubelet[3217]: E0120 01:42:13.459092 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.459114 kubelet[3217]: W0120 01:42:13.459110 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.459329 kubelet[3217]: E0120 01:42:13.459124 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.459895 kubelet[3217]: E0120 01:42:13.459874 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.460081 kubelet[3217]: W0120 01:42:13.459981 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.460081 kubelet[3217]: E0120 01:42:13.459997 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.460428 kubelet[3217]: E0120 01:42:13.460357 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.460428 kubelet[3217]: W0120 01:42:13.460369 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.460428 kubelet[3217]: E0120 01:42:13.460383 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.461091 kubelet[3217]: E0120 01:42:13.460980 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.461091 kubelet[3217]: W0120 01:42:13.460992 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.461091 kubelet[3217]: E0120 01:42:13.461002 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.461357 kubelet[3217]: E0120 01:42:13.461326 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.461357 kubelet[3217]: W0120 01:42:13.461337 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.461357 kubelet[3217]: E0120 01:42:13.461347 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.461698 kubelet[3217]: E0120 01:42:13.461632 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.461698 kubelet[3217]: W0120 01:42:13.461643 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.461698 kubelet[3217]: E0120 01:42:13.461653 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.462014 kubelet[3217]: E0120 01:42:13.462002 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.462085 kubelet[3217]: W0120 01:42:13.462061 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.462085 kubelet[3217]: E0120 01:42:13.462075 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.462485 kubelet[3217]: E0120 01:42:13.462332 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.462485 kubelet[3217]: W0120 01:42:13.462343 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.462485 kubelet[3217]: E0120 01:42:13.462466 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.463602 kubelet[3217]: E0120 01:42:13.463496 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.463602 kubelet[3217]: W0120 01:42:13.463509 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.463602 kubelet[3217]: E0120 01:42:13.463519 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.464707 kubelet[3217]: E0120 01:42:13.464573 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.464707 kubelet[3217]: W0120 01:42:13.464585 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.464707 kubelet[3217]: E0120 01:42:13.464597 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.465338 containerd[1737]: time="2026-01-20T01:42:13.465095171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wvjz7,Uid:62ce4b8e-7632-4ac8-a62b-7cb6982b4e5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2\"" Jan 20 01:42:13.465751 kubelet[3217]: E0120 01:42:13.465653 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.465751 kubelet[3217]: W0120 01:42:13.465665 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.465751 kubelet[3217]: E0120 01:42:13.465676 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.466108 kubelet[3217]: E0120 01:42:13.465916 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.466108 kubelet[3217]: W0120 01:42:13.465926 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.466108 kubelet[3217]: E0120 01:42:13.465936 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.466680 kubelet[3217]: E0120 01:42:13.466667 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.466846 kubelet[3217]: W0120 01:42:13.466755 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.466846 kubelet[3217]: E0120 01:42:13.466772 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.467458 kubelet[3217]: E0120 01:42:13.467349 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.467458 kubelet[3217]: W0120 01:42:13.467360 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.467458 kubelet[3217]: E0120 01:42:13.467383 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:13.467898 kubelet[3217]: E0120 01:42:13.467770 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.467898 kubelet[3217]: W0120 01:42:13.467823 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.467898 kubelet[3217]: E0120 01:42:13.467836 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.468288 kubelet[3217]: E0120 01:42:13.468251 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.468288 kubelet[3217]: W0120 01:42:13.468265 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.468429 kubelet[3217]: E0120 01:42:13.468275 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.468742 kubelet[3217]: E0120 01:42:13.468664 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.468742 kubelet[3217]: W0120 01:42:13.468675 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.468742 kubelet[3217]: E0120 01:42:13.468685 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.469144 kubelet[3217]: E0120 01:42:13.469024 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.469144 kubelet[3217]: W0120 01:42:13.469035 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.469144 kubelet[3217]: E0120 01:42:13.469045 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:13.479643 kubelet[3217]: E0120 01:42:13.479626 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:13.479790 kubelet[3217]: W0120 01:42:13.479741 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:13.479790 kubelet[3217]: E0120 01:42:13.479758 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:14.350708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165844853.mount: Deactivated successfully. Jan 20 01:42:15.137818 containerd[1737]: time="2026-01-20T01:42:15.137038255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:15.139193 containerd[1737]: time="2026-01-20T01:42:15.139055496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 20 01:42:15.141436 containerd[1737]: time="2026-01-20T01:42:15.141134217Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:15.145299 containerd[1737]: time="2026-01-20T01:42:15.144908379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:15.145770 containerd[1737]: time="2026-01-20T01:42:15.145733699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.891947358s" Jan 20 01:42:15.145770 containerd[1737]: time="2026-01-20T01:42:15.145766219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 20 01:42:15.147951 containerd[1737]: time="2026-01-20T01:42:15.147667980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 01:42:15.165952 containerd[1737]: time="2026-01-20T01:42:15.165756830Z" level=info msg="CreateContainer within sandbox \"15cc09e45dcb8296aae57327d57e6aafec74166e37f95bc07d14f409f112975d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 01:42:15.195521 containerd[1737]: time="2026-01-20T01:42:15.195478325Z" level=info msg="CreateContainer within sandbox \"15cc09e45dcb8296aae57327d57e6aafec74166e37f95bc07d14f409f112975d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a51e0d089e85a69decd8d242b77b6b9c83b08140155a643354cc9dda15cd7dec\"" Jan 20 01:42:15.198815 containerd[1737]: time="2026-01-20T01:42:15.196171885Z" level=info msg="StartContainer for \"a51e0d089e85a69decd8d242b77b6b9c83b08140155a643354cc9dda15cd7dec\"" Jan 20 01:42:15.224955 systemd[1]: Started cri-containerd-a51e0d089e85a69decd8d242b77b6b9c83b08140155a643354cc9dda15cd7dec.scope - libcontainer container a51e0d089e85a69decd8d242b77b6b9c83b08140155a643354cc9dda15cd7dec. 
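The repeated kubelet records above all come from FlexVolume dynamic plugin probing: kubelet finds the plugin directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, invokes the driver binary with the argument `init`, and expects a JSON status object on stdout. Because the `uds` executable is missing ("executable file not found in $PATH"), the call returns empty output and the JSON unmarshal fails, so the probe is retried and the same three-line pattern recurs. The sketch below is a minimal, hypothetical FlexVolume driver entry point, not the real nodeagent~uds driver, showing the init response shape kubelet parses; the capability values are illustrative.

```go
// flexvol_init_sketch.go
//
// Minimal, hypothetical FlexVolume driver illustrating the "init" call that
// kubelet issues when it probes a directory under
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/.
// kubelet reads one JSON status object from stdout; an empty response is what
// produces the "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape kubelet unmarshals after each driver call.
type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // reported on "init"
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}

	switch os.Args[1] {
	case "init":
		// Declare that this driver has no attach/detach support, so kubelet
		// will only call mount/unmount-style operations on it.
		reply(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	default:
		// Unimplemented calls must still answer with valid JSON.
		reply(driverStatus{Status: "Not supported", Message: os.Args[1]})
	}
}

// reply writes a single JSON object to stdout, which is all kubelet reads.
func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}
```

These probe failures typically stop once an executable driver exists at the probed path or the empty nodeagent~uds directory is removed; as the surrounding records show, they do not block the sandbox creation, image pulls, or container starts interleaved with them.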
Jan 20 01:42:15.277097 containerd[1737]: time="2026-01-20T01:42:15.276939487Z" level=info msg="StartContainer for \"a51e0d089e85a69decd8d242b77b6b9c83b08140155a643354cc9dda15cd7dec\" returns successfully" Jan 20 01:42:15.417216 kubelet[3217]: E0120 01:42:15.415777 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:15.532925 kubelet[3217]: I0120 01:42:15.532341 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f64486dc8-9zqw7" podStartSLOduration=1.6367626180000001 podStartE2EDuration="3.532325618s" podCreationTimestamp="2026-01-20 01:42:12 +0000 UTC" firstStartedPulling="2026-01-20 01:42:13.25197334 +0000 UTC m=+26.949140141" lastFinishedPulling="2026-01-20 01:42:15.14753634 +0000 UTC m=+28.844703141" observedRunningTime="2026-01-20 01:42:15.531351057 +0000 UTC m=+29.228517858" watchObservedRunningTime="2026-01-20 01:42:15.532325618 +0000 UTC m=+29.229492459" Jan 20 01:42:15.553971 kubelet[3217]: E0120 01:42:15.553118 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:15.553971 kubelet[3217]: W0120 01:42:15.553139 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:15.553971 kubelet[3217]: E0120 01:42:15.553159 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:15.554370 kubelet[3217]: E0120 01:42:15.554225 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:15.554370 kubelet[3217]: W0120 01:42:15.554239 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:15.554370 kubelet[3217]: E0120 01:42:15.554282 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:15.554762 kubelet[3217]: E0120 01:42:15.554602 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:15.554762 kubelet[3217]: W0120 01:42:15.554614 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:15.554762 kubelet[3217]: E0120 01:42:15.554624 3217 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:16.381833 containerd[1737]: time="2026-01-20T01:42:16.381460853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:16.383253 containerd[1737]: time="2026-01-20T01:42:16.383224174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 20 01:42:16.385807 containerd[1737]: time="2026-01-20T01:42:16.385743855Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:16.388806 containerd[1737]: time="2026-01-20T01:42:16.388744217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:16.389594 containerd[1737]: time="2026-01-20T01:42:16.389371657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.241672397s" Jan 20 01:42:16.389594 containerd[1737]: time="2026-01-20T01:42:16.389405457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 20 01:42:16.396167 containerd[1737]: time="2026-01-20T01:42:16.396133141Z" level=info msg="CreateContainer within sandbox \"14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 01:42:16.424509 containerd[1737]: time="2026-01-20T01:42:16.424461075Z" level=info msg="CreateContainer within sandbox \"14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644\"" Jan 20 01:42:16.425938 containerd[1737]: time="2026-01-20T01:42:16.425914476Z" level=info msg="StartContainer for \"c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644\"" Jan 20 01:42:16.459178 systemd[1]: Started cri-containerd-c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644.scope - libcontainer container c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644. Jan 20 01:42:16.489536 containerd[1737]: time="2026-01-20T01:42:16.489485869Z" level=info msg="StartContainer for \"c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644\" returns successfully" Jan 20 01:42:16.498411 systemd[1]: cri-containerd-c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644.scope: Deactivated successfully. Jan 20 01:42:16.516305 kubelet[3217]: I0120 01:42:16.516281 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:42:16.524087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644-rootfs.mount: Deactivated successfully. 
Jan 20 01:42:17.415191 kubelet[3217]: E0120 01:42:17.415148 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:17.581073 containerd[1737]: time="2026-01-20T01:42:17.580994068Z" level=info msg="shim disconnected" id=c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644 namespace=k8s.io Jan 20 01:42:17.581073 containerd[1737]: time="2026-01-20T01:42:17.581051148Z" level=warning msg="cleaning up after shim disconnected" id=c84e3a4269c75db801ed018263ab9f7d1eba4eb5f28c8d0a718cfb85aefc1644 namespace=k8s.io Jan 20 01:42:17.581073 containerd[1737]: time="2026-01-20T01:42:17.581059628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:42:17.590919 containerd[1737]: time="2026-01-20T01:42:17.590872393Z" level=warning msg="cleanup warnings time=\"2026-01-20T01:42:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 01:42:18.523917 containerd[1737]: time="2026-01-20T01:42:18.523843032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 01:42:19.416318 kubelet[3217]: E0120 01:42:19.415962 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:20.711456 containerd[1737]: time="2026-01-20T01:42:20.711410834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:20.714051 containerd[1737]: time="2026-01-20T01:42:20.714024555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 20 01:42:20.717851 containerd[1737]: time="2026-01-20T01:42:20.717824117Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:20.721277 containerd[1737]: time="2026-01-20T01:42:20.721247439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:20.722070 containerd[1737]: time="2026-01-20T01:42:20.722045000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.198150488s" Jan 20 01:42:20.722179 containerd[1737]: time="2026-01-20T01:42:20.722163280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 20 01:42:20.728839 containerd[1737]: time="2026-01-20T01:42:20.728807683Z" level=info msg="CreateContainer within sandbox 
\"14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:42:20.758696 containerd[1737]: time="2026-01-20T01:42:20.758654978Z" level=info msg="CreateContainer within sandbox \"14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2\"" Jan 20 01:42:20.760679 containerd[1737]: time="2026-01-20T01:42:20.760526379Z" level=info msg="StartContainer for \"0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2\"" Jan 20 01:42:20.791929 systemd[1]: Started cri-containerd-0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2.scope - libcontainer container 0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2. Jan 20 01:42:20.819623 containerd[1737]: time="2026-01-20T01:42:20.819578130Z" level=info msg="StartContainer for \"0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2\" returns successfully" Jan 20 01:42:21.415001 kubelet[3217]: E0120 01:42:21.414951 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:21.965390 systemd[1]: cri-containerd-0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2.scope: Deactivated successfully. Jan 20 01:42:21.984864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2-rootfs.mount: Deactivated successfully. Jan 20 01:42:22.061139 kubelet[3217]: I0120 01:42:22.060729 3217 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 01:42:22.826850 containerd[1737]: time="2026-01-20T01:42:22.825108572Z" level=info msg="shim disconnected" id=0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2 namespace=k8s.io Jan 20 01:42:22.826850 containerd[1737]: time="2026-01-20T01:42:22.825164012Z" level=warning msg="cleaning up after shim disconnected" id=0885951c6cd8f88d9194bd6fec299d9ded81953141f84e5e8d27627169f294f2 namespace=k8s.io Jan 20 01:42:22.826850 containerd[1737]: time="2026-01-20T01:42:22.825173292Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:42:22.832154 systemd[1]: Created slice kubepods-burstable-pod138649de_b257_4de2_b470_3f54b1f24475.slice - libcontainer container kubepods-burstable-pod138649de_b257_4de2_b470_3f54b1f24475.slice. Jan 20 01:42:22.844414 systemd[1]: Created slice kubepods-besteffort-pod44bdd32b_1d8e_4e5b_bb73_1e59535dcb96.slice - libcontainer container kubepods-besteffort-pod44bdd32b_1d8e_4e5b_bb73_1e59535dcb96.slice. Jan 20 01:42:22.853996 containerd[1737]: time="2026-01-20T01:42:22.852348581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gxvd,Uid:44bdd32b-1d8e-4e5b-bb73-1e59535dcb96,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:22.860272 systemd[1]: Created slice kubepods-besteffort-poda793b124_6073_4604_9c24_ad5326cb3836.slice - libcontainer container kubepods-besteffort-poda793b124_6073_4604_9c24_ad5326cb3836.slice. 
Jan 20 01:42:22.873350 systemd[1]: Created slice kubepods-besteffort-pod40f43ea6_fce8_4f79_952b_c7d866e60aed.slice - libcontainer container kubepods-besteffort-pod40f43ea6_fce8_4f79_952b_c7d866e60aed.slice. Jan 20 01:42:22.890573 systemd[1]: Created slice kubepods-besteffort-podd821eeb9_a64e_4dc2_bbef_b0976a3bf49a.slice - libcontainer container kubepods-besteffort-podd821eeb9_a64e_4dc2_bbef_b0976a3bf49a.slice. Jan 20 01:42:22.903531 systemd[1]: Created slice kubepods-burstable-pod8eac5578_1c74_4107_a02f_d780338d63d7.slice - libcontainer container kubepods-burstable-pod8eac5578_1c74_4107_a02f_d780338d63d7.slice. Jan 20 01:42:22.911212 systemd[1]: Created slice kubepods-besteffort-pod68f5545e_7661_40cf_baeb_c5c30a862135.slice - libcontainer container kubepods-besteffort-pod68f5545e_7661_40cf_baeb_c5c30a862135.slice. Jan 20 01:42:22.921248 systemd[1]: Created slice kubepods-besteffort-pod8c28c5ae_f540_4875_a2fd_481f9d148cbd.slice - libcontainer container kubepods-besteffort-pod8c28c5ae_f540_4875_a2fd_481f9d148cbd.slice. Jan 20 01:42:22.923713 kubelet[3217]: I0120 01:42:22.923344 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f43ea6-fce8-4f79-952b-c7d866e60aed-whisker-ca-bundle\") pod \"whisker-d78647bdf-6kjdc\" (UID: \"40f43ea6-fce8-4f79-952b-c7d866e60aed\") " pod="calico-system/whisker-d78647bdf-6kjdc" Jan 20 01:42:22.923713 kubelet[3217]: I0120 01:42:22.923381 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8c28c5ae-f540-4875-a2fd-481f9d148cbd-calico-apiserver-certs\") pod \"calico-apiserver-588969c7f9-g5sn6\" (UID: \"8c28c5ae-f540-4875-a2fd-481f9d148cbd\") " pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" Jan 20 01:42:22.923713 kubelet[3217]: I0120 01:42:22.923398 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68f5545e-7661-40cf-baeb-c5c30a862135-goldmane-ca-bundle\") pod \"goldmane-666569f655-qkv75\" (UID: \"68f5545e-7661-40cf-baeb-c5c30a862135\") " pod="calico-system/goldmane-666569f655-qkv75" Jan 20 01:42:22.923713 kubelet[3217]: I0120 01:42:22.923607 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddcnl\" (UniqueName: \"kubernetes.io/projected/8c28c5ae-f540-4875-a2fd-481f9d148cbd-kube-api-access-ddcnl\") pod \"calico-apiserver-588969c7f9-g5sn6\" (UID: \"8c28c5ae-f540-4875-a2fd-481f9d148cbd\") " pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" Jan 20 01:42:22.923713 kubelet[3217]: I0120 01:42:22.923631 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a793b124-6073-4604-9c24-ad5326cb3836-tigera-ca-bundle\") pod \"calico-kube-controllers-59886bc69c-2p6tc\" (UID: \"a793b124-6073-4604-9c24-ad5326cb3836\") " pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" Jan 20 01:42:22.924116 kubelet[3217]: I0120 01:42:22.923646 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/68f5545e-7661-40cf-baeb-c5c30a862135-goldmane-key-pair\") pod \"goldmane-666569f655-qkv75\" (UID: \"68f5545e-7661-40cf-baeb-c5c30a862135\") " 
pod="calico-system/goldmane-666569f655-qkv75" Jan 20 01:42:22.927306 kubelet[3217]: I0120 01:42:22.924196 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/97a804d9-65a3-4df8-a009-6289887849fb-calico-apiserver-certs\") pod \"calico-apiserver-7dd6fbd444-88ccv\" (UID: \"97a804d9-65a3-4df8-a009-6289887849fb\") " pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" Jan 20 01:42:22.927306 kubelet[3217]: I0120 01:42:22.926670 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlh2\" (UniqueName: \"kubernetes.io/projected/a793b124-6073-4604-9c24-ad5326cb3836-kube-api-access-fhlh2\") pod \"calico-kube-controllers-59886bc69c-2p6tc\" (UID: \"a793b124-6073-4604-9c24-ad5326cb3836\") " pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" Jan 20 01:42:22.927306 kubelet[3217]: I0120 01:42:22.926737 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40f43ea6-fce8-4f79-952b-c7d866e60aed-whisker-backend-key-pair\") pod \"whisker-d78647bdf-6kjdc\" (UID: \"40f43ea6-fce8-4f79-952b-c7d866e60aed\") " pod="calico-system/whisker-d78647bdf-6kjdc" Jan 20 01:42:22.927306 kubelet[3217]: I0120 01:42:22.926756 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2sg\" (UniqueName: \"kubernetes.io/projected/40f43ea6-fce8-4f79-952b-c7d866e60aed-kube-api-access-bc2sg\") pod \"whisker-d78647bdf-6kjdc\" (UID: \"40f43ea6-fce8-4f79-952b-c7d866e60aed\") " pod="calico-system/whisker-d78647bdf-6kjdc" Jan 20 01:42:22.927306 kubelet[3217]: I0120 01:42:22.926797 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/138649de-b257-4de2-b470-3f54b1f24475-config-volume\") pod \"coredns-674b8bbfcf-ddgxn\" (UID: \"138649de-b257-4de2-b470-3f54b1f24475\") " pod="kube-system/coredns-674b8bbfcf-ddgxn" Jan 20 01:42:22.927483 kubelet[3217]: I0120 01:42:22.926817 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lrb8\" (UniqueName: \"kubernetes.io/projected/138649de-b257-4de2-b470-3f54b1f24475-kube-api-access-9lrb8\") pod \"coredns-674b8bbfcf-ddgxn\" (UID: \"138649de-b257-4de2-b470-3f54b1f24475\") " pod="kube-system/coredns-674b8bbfcf-ddgxn" Jan 20 01:42:22.927483 kubelet[3217]: I0120 01:42:22.926844 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xth98\" (UniqueName: \"kubernetes.io/projected/97a804d9-65a3-4df8-a009-6289887849fb-kube-api-access-xth98\") pod \"calico-apiserver-7dd6fbd444-88ccv\" (UID: \"97a804d9-65a3-4df8-a009-6289887849fb\") " pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" Jan 20 01:42:22.927483 kubelet[3217]: I0120 01:42:22.926867 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgvkr\" (UniqueName: \"kubernetes.io/projected/d821eeb9-a64e-4dc2-bbef-b0976a3bf49a-kube-api-access-cgvkr\") pod \"calico-apiserver-588969c7f9-dsqq7\" (UID: \"d821eeb9-a64e-4dc2-bbef-b0976a3bf49a\") " pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" Jan 20 01:42:22.927483 kubelet[3217]: I0120 01:42:22.926882 3217 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88pg\" (UniqueName: \"kubernetes.io/projected/8eac5578-1c74-4107-a02f-d780338d63d7-kube-api-access-b88pg\") pod \"coredns-674b8bbfcf-m78tf\" (UID: \"8eac5578-1c74-4107-a02f-d780338d63d7\") " pod="kube-system/coredns-674b8bbfcf-m78tf" Jan 20 01:42:22.927483 kubelet[3217]: I0120 01:42:22.926898 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f5545e-7661-40cf-baeb-c5c30a862135-config\") pod \"goldmane-666569f655-qkv75\" (UID: \"68f5545e-7661-40cf-baeb-c5c30a862135\") " pod="calico-system/goldmane-666569f655-qkv75" Jan 20 01:42:22.927595 kubelet[3217]: I0120 01:42:22.926926 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqhz\" (UniqueName: \"kubernetes.io/projected/68f5545e-7661-40cf-baeb-c5c30a862135-kube-api-access-4fqhz\") pod \"goldmane-666569f655-qkv75\" (UID: \"68f5545e-7661-40cf-baeb-c5c30a862135\") " pod="calico-system/goldmane-666569f655-qkv75" Jan 20 01:42:22.927595 kubelet[3217]: I0120 01:42:22.926961 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d821eeb9-a64e-4dc2-bbef-b0976a3bf49a-calico-apiserver-certs\") pod \"calico-apiserver-588969c7f9-dsqq7\" (UID: \"d821eeb9-a64e-4dc2-bbef-b0976a3bf49a\") " pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" Jan 20 01:42:22.927595 kubelet[3217]: I0120 01:42:22.926982 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eac5578-1c74-4107-a02f-d780338d63d7-config-volume\") pod \"coredns-674b8bbfcf-m78tf\" (UID: \"8eac5578-1c74-4107-a02f-d780338d63d7\") " pod="kube-system/coredns-674b8bbfcf-m78tf" Jan 20 01:42:22.934053 systemd[1]: Created slice kubepods-besteffort-pod97a804d9_65a3_4df8_a009_6289887849fb.slice - libcontainer container kubepods-besteffort-pod97a804d9_65a3_4df8_a009_6289887849fb.slice. 
Jan 20 01:42:22.978017 containerd[1737]: time="2026-01-20T01:42:22.977967138Z" level=error msg="Failed to destroy network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:22.981143 containerd[1737]: time="2026-01-20T01:42:22.980636299Z" level=error msg="encountered an error cleaning up failed sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:22.981143 containerd[1737]: time="2026-01-20T01:42:22.980876379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gxvd,Uid:44bdd32b-1d8e-4e5b-bb73-1e59535dcb96,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:22.982104 kubelet[3217]: E0120 01:42:22.982067 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:22.982212 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295-shm.mount: Deactivated successfully. 
Jan 20 01:42:22.982992 kubelet[3217]: E0120 01:42:22.982882 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gxvd" Jan 20 01:42:22.983350 kubelet[3217]: E0120 01:42:22.982972 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gxvd" Jan 20 01:42:22.983350 kubelet[3217]: E0120 01:42:22.983061 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:23.138769 containerd[1737]: time="2026-01-20T01:42:23.137411104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddgxn,Uid:138649de-b257-4de2-b470-3f54b1f24475,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:23.168738 containerd[1737]: time="2026-01-20T01:42:23.168697073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59886bc69c-2p6tc,Uid:a793b124-6073-4604-9c24-ad5326cb3836,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:23.191166 containerd[1737]: time="2026-01-20T01:42:23.190991119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d78647bdf-6kjdc,Uid:40f43ea6-fce8-4f79-952b-c7d866e60aed,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:23.196058 containerd[1737]: time="2026-01-20T01:42:23.195752921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588969c7f9-dsqq7,Uid:d821eeb9-a64e-4dc2-bbef-b0976a3bf49a,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:42:23.209441 containerd[1737]: time="2026-01-20T01:42:23.209398084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m78tf,Uid:8eac5578-1c74-4107-a02f-d780338d63d7,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:23.215618 containerd[1737]: time="2026-01-20T01:42:23.215556966Z" level=error msg="Failed to destroy network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.216753 containerd[1737]: time="2026-01-20T01:42:23.216722767Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-qkv75,Uid:68f5545e-7661-40cf-baeb-c5c30a862135,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:23.219072 containerd[1737]: time="2026-01-20T01:42:23.219034687Z" level=error msg="encountered an error cleaning up failed sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.219312 containerd[1737]: time="2026-01-20T01:42:23.219199967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddgxn,Uid:138649de-b257-4de2-b470-3f54b1f24475,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.221446 kubelet[3217]: E0120 01:42:23.221000 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.221446 kubelet[3217]: E0120 01:42:23.221059 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ddgxn" Jan 20 01:42:23.221446 kubelet[3217]: E0120 01:42:23.221078 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ddgxn" Jan 20 01:42:23.221638 kubelet[3217]: E0120 01:42:23.221125 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ddgxn_kube-system(138649de-b257-4de2-b470-3f54b1f24475)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ddgxn_kube-system(138649de-b257-4de2-b470-3f54b1f24475)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ddgxn" podUID="138649de-b257-4de2-b470-3f54b1f24475" Jan 20 01:42:23.232215 containerd[1737]: time="2026-01-20T01:42:23.232006291Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-588969c7f9-g5sn6,Uid:8c28c5ae-f540-4875-a2fd-481f9d148cbd,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:42:23.239231 containerd[1737]: time="2026-01-20T01:42:23.239176773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dd6fbd444-88ccv,Uid:97a804d9-65a3-4df8-a009-6289887849fb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:42:23.259509 containerd[1737]: time="2026-01-20T01:42:23.259365619Z" level=error msg="Failed to destroy network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.259790 containerd[1737]: time="2026-01-20T01:42:23.259660179Z" level=error msg="encountered an error cleaning up failed sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.259790 containerd[1737]: time="2026-01-20T01:42:23.259709419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59886bc69c-2p6tc,Uid:a793b124-6073-4604-9c24-ad5326cb3836,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.260241 kubelet[3217]: E0120 01:42:23.259917 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.260241 kubelet[3217]: E0120 01:42:23.259976 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" Jan 20 01:42:23.260241 kubelet[3217]: E0120 01:42:23.259997 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" Jan 20 01:42:23.260456 kubelet[3217]: E0120 01:42:23.260043 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59886bc69c-2p6tc_calico-system(a793b124-6073-4604-9c24-ad5326cb3836)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-kube-controllers-59886bc69c-2p6tc_calico-system(a793b124-6073-4604-9c24-ad5326cb3836)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:42:23.296041 containerd[1737]: time="2026-01-20T01:42:23.295996789Z" level=error msg="Failed to destroy network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.296470 containerd[1737]: time="2026-01-20T01:42:23.296445349Z" level=error msg="encountered an error cleaning up failed sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.296711 containerd[1737]: time="2026-01-20T01:42:23.296625709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d78647bdf-6kjdc,Uid:40f43ea6-fce8-4f79-952b-c7d866e60aed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.296944 kubelet[3217]: E0120 01:42:23.296905 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.297600 kubelet[3217]: E0120 01:42:23.296964 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d78647bdf-6kjdc" Jan 20 01:42:23.297600 kubelet[3217]: E0120 01:42:23.296984 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d78647bdf-6kjdc" Jan 20 01:42:23.297600 kubelet[3217]: E0120 01:42:23.297029 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-d78647bdf-6kjdc_calico-system(40f43ea6-fce8-4f79-952b-c7d866e60aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d78647bdf-6kjdc_calico-system(40f43ea6-fce8-4f79-952b-c7d866e60aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d78647bdf-6kjdc" podUID="40f43ea6-fce8-4f79-952b-c7d866e60aed" Jan 20 01:42:23.410823 containerd[1737]: time="2026-01-20T01:42:23.410732182Z" level=error msg="Failed to destroy network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.411068 containerd[1737]: time="2026-01-20T01:42:23.411041302Z" level=error msg="encountered an error cleaning up failed sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.411112 containerd[1737]: time="2026-01-20T01:42:23.411089102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588969c7f9-dsqq7,Uid:d821eeb9-a64e-4dc2-bbef-b0976a3bf49a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.411697 kubelet[3217]: E0120 01:42:23.411309 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.411697 kubelet[3217]: E0120 01:42:23.411368 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" Jan 20 01:42:23.411697 kubelet[3217]: E0120 01:42:23.411388 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" Jan 20 01:42:23.411900 kubelet[3217]: E0120 01:42:23.411433 3217 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-588969c7f9-dsqq7_calico-apiserver(d821eeb9-a64e-4dc2-bbef-b0976a3bf49a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-588969c7f9-dsqq7_calico-apiserver(d821eeb9-a64e-4dc2-bbef-b0976a3bf49a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:42:23.425973 containerd[1737]: time="2026-01-20T01:42:23.425928227Z" level=error msg="Failed to destroy network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.426397 containerd[1737]: time="2026-01-20T01:42:23.426373187Z" level=error msg="encountered an error cleaning up failed sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.427177 containerd[1737]: time="2026-01-20T01:42:23.426653147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m78tf,Uid:8eac5578-1c74-4107-a02f-d780338d63d7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.427353 kubelet[3217]: E0120 01:42:23.426846 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.427353 kubelet[3217]: E0120 01:42:23.426898 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m78tf" Jan 20 01:42:23.427353 kubelet[3217]: E0120 01:42:23.426917 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-m78tf" Jan 20 01:42:23.427467 kubelet[3217]: E0120 01:42:23.426963 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m78tf_kube-system(8eac5578-1c74-4107-a02f-d780338d63d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m78tf_kube-system(8eac5578-1c74-4107-a02f-d780338d63d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m78tf" podUID="8eac5578-1c74-4107-a02f-d780338d63d7" Jan 20 01:42:23.434513 containerd[1737]: time="2026-01-20T01:42:23.434472189Z" level=error msg="Failed to destroy network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.435092 containerd[1737]: time="2026-01-20T01:42:23.435002149Z" level=error msg="encountered an error cleaning up failed sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.435092 containerd[1737]: time="2026-01-20T01:42:23.435052589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588969c7f9-g5sn6,Uid:8c28c5ae-f540-4875-a2fd-481f9d148cbd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.435403 kubelet[3217]: E0120 01:42:23.435376 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.435966 kubelet[3217]: E0120 01:42:23.435507 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" Jan 20 01:42:23.435966 kubelet[3217]: E0120 01:42:23.435529 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" Jan 20 01:42:23.435966 kubelet[3217]: E0120 01:42:23.435580 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-588969c7f9-g5sn6_calico-apiserver(8c28c5ae-f540-4875-a2fd-481f9d148cbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-588969c7f9-g5sn6_calico-apiserver(8c28c5ae-f540-4875-a2fd-481f9d148cbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:42:23.446693 containerd[1737]: time="2026-01-20T01:42:23.446646313Z" level=error msg="Failed to destroy network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.447501 containerd[1737]: time="2026-01-20T01:42:23.447386113Z" level=error msg="encountered an error cleaning up failed sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.447501 containerd[1737]: time="2026-01-20T01:42:23.447462473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dd6fbd444-88ccv,Uid:97a804d9-65a3-4df8-a009-6289887849fb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.448214 kubelet[3217]: E0120 01:42:23.447852 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.448214 kubelet[3217]: E0120 01:42:23.448101 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" Jan 20 01:42:23.448214 kubelet[3217]: E0120 01:42:23.448120 3217 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" Jan 20 01:42:23.448358 kubelet[3217]: E0120 01:42:23.448173 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dd6fbd444-88ccv_calico-apiserver(97a804d9-65a3-4df8-a009-6289887849fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dd6fbd444-88ccv_calico-apiserver(97a804d9-65a3-4df8-a009-6289887849fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:42:23.452173 containerd[1737]: time="2026-01-20T01:42:23.452134194Z" level=error msg="Failed to destroy network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.452443 containerd[1737]: time="2026-01-20T01:42:23.452418554Z" level=error msg="encountered an error cleaning up failed sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.452490 containerd[1737]: time="2026-01-20T01:42:23.452470634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qkv75,Uid:68f5545e-7661-40cf-baeb-c5c30a862135,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.452692 kubelet[3217]: E0120 01:42:23.452656 3217 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.452758 kubelet[3217]: E0120 01:42:23.452714 3217 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qkv75" Jan 20 01:42:23.452758 kubelet[3217]: E0120 01:42:23.452734 3217 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qkv75" Jan 20 01:42:23.452894 kubelet[3217]: E0120 01:42:23.452777 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qkv75_calico-system(68f5545e-7661-40cf-baeb-c5c30a862135)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qkv75_calico-system(68f5545e-7661-40cf-baeb-c5c30a862135)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:42:23.531722 kubelet[3217]: I0120 01:42:23.531110 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:23.533030 containerd[1737]: time="2026-01-20T01:42:23.532988937Z" level=info msg="StopPodSandbox for \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\"" Jan 20 01:42:23.533561 containerd[1737]: time="2026-01-20T01:42:23.533538897Z" level=info msg="Ensure that sandbox 78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca in task-service has been cleanup successfully" Jan 20 01:42:23.534460 kubelet[3217]: I0120 01:42:23.534440 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:23.535029 containerd[1737]: time="2026-01-20T01:42:23.535007218Z" level=info msg="StopPodSandbox for \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\"" Jan 20 01:42:23.535457 containerd[1737]: time="2026-01-20T01:42:23.535244978Z" level=info msg="Ensure that sandbox 2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4 in task-service has been cleanup successfully" Jan 20 01:42:23.538583 kubelet[3217]: I0120 01:42:23.538553 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:23.539242 containerd[1737]: time="2026-01-20T01:42:23.539214459Z" level=info msg="StopPodSandbox for \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\"" Jan 20 01:42:23.540937 containerd[1737]: time="2026-01-20T01:42:23.540908700Z" level=info msg="Ensure that sandbox ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295 in task-service has been cleanup successfully" Jan 20 01:42:23.546339 containerd[1737]: time="2026-01-20T01:42:23.546251501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 01:42:23.547776 kubelet[3217]: I0120 01:42:23.547113 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:23.548696 containerd[1737]: time="2026-01-20T01:42:23.548357782Z" level=info 
msg="StopPodSandbox for \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\"" Jan 20 01:42:23.550251 kubelet[3217]: I0120 01:42:23.549678 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:23.552974 containerd[1737]: time="2026-01-20T01:42:23.552944143Z" level=info msg="Ensure that sandbox a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be in task-service has been cleanup successfully" Jan 20 01:42:23.554401 containerd[1737]: time="2026-01-20T01:42:23.554369583Z" level=info msg="StopPodSandbox for \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\"" Jan 20 01:42:23.558978 containerd[1737]: time="2026-01-20T01:42:23.558948665Z" level=info msg="Ensure that sandbox 0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e in task-service has been cleanup successfully" Jan 20 01:42:23.563619 kubelet[3217]: I0120 01:42:23.563406 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:23.565197 containerd[1737]: time="2026-01-20T01:42:23.565113067Z" level=info msg="StopPodSandbox for \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\"" Jan 20 01:42:23.565475 containerd[1737]: time="2026-01-20T01:42:23.565317627Z" level=info msg="Ensure that sandbox b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6 in task-service has been cleanup successfully" Jan 20 01:42:23.567376 kubelet[3217]: I0120 01:42:23.567277 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:23.568505 containerd[1737]: time="2026-01-20T01:42:23.568440347Z" level=info msg="StopPodSandbox for \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\"" Jan 20 01:42:23.569138 containerd[1737]: time="2026-01-20T01:42:23.569017508Z" level=info msg="Ensure that sandbox b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830 in task-service has been cleanup successfully" Jan 20 01:42:23.574019 kubelet[3217]: I0120 01:42:23.573963 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:23.577549 containerd[1737]: time="2026-01-20T01:42:23.577014790Z" level=info msg="StopPodSandbox for \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\"" Jan 20 01:42:23.578205 containerd[1737]: time="2026-01-20T01:42:23.578179430Z" level=info msg="Ensure that sandbox 48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772 in task-service has been cleanup successfully" Jan 20 01:42:23.587957 kubelet[3217]: I0120 01:42:23.587893 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:23.591248 containerd[1737]: time="2026-01-20T01:42:23.590597514Z" level=info msg="StopPodSandbox for \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\"" Jan 20 01:42:23.591248 containerd[1737]: time="2026-01-20T01:42:23.591032914Z" level=info msg="Ensure that sandbox 53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2 in task-service has been cleanup successfully" Jan 20 01:42:23.624970 containerd[1737]: time="2026-01-20T01:42:23.624915524Z" level=error 
msg="StopPodSandbox for \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\" failed" error="failed to destroy network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.625271 kubelet[3217]: E0120 01:42:23.625227 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:23.625589 kubelet[3217]: E0120 01:42:23.625293 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4"} Jan 20 01:42:23.625589 kubelet[3217]: E0120 01:42:23.625374 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"138649de-b257-4de2-b470-3f54b1f24475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.625589 kubelet[3217]: E0120 01:42:23.625396 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"138649de-b257-4de2-b470-3f54b1f24475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ddgxn" podUID="138649de-b257-4de2-b470-3f54b1f24475" Jan 20 01:42:23.659450 containerd[1737]: time="2026-01-20T01:42:23.659396654Z" level=error msg="StopPodSandbox for \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\" failed" error="failed to destroy network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.660020 kubelet[3217]: E0120 01:42:23.659882 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:23.660020 kubelet[3217]: E0120 01:42:23.659940 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772"} Jan 20 01:42:23.660020 kubelet[3217]: E0120 01:42:23.659974 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d821eeb9-a64e-4dc2-bbef-b0976a3bf49a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.660020 kubelet[3217]: E0120 01:42:23.660002 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d821eeb9-a64e-4dc2-bbef-b0976a3bf49a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:42:23.672335 containerd[1737]: time="2026-01-20T01:42:23.670809617Z" level=error msg="StopPodSandbox for \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\" failed" error="failed to destroy network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.672871 kubelet[3217]: E0120 01:42:23.672759 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:23.672871 kubelet[3217]: E0120 01:42:23.672820 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2"} Jan 20 01:42:23.672871 kubelet[3217]: E0120 01:42:23.672855 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68f5545e-7661-40cf-baeb-c5c30a862135\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.672871 kubelet[3217]: E0120 01:42:23.672879 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68f5545e-7661-40cf-baeb-c5c30a862135\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:42:23.674603 containerd[1737]: time="2026-01-20T01:42:23.674459938Z" level=error msg="StopPodSandbox for \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\" failed" error="failed to destroy network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.674794 kubelet[3217]: E0120 01:42:23.674658 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:23.674794 kubelet[3217]: E0120 01:42:23.674695 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830"} Jan 20 01:42:23.674794 kubelet[3217]: E0120 01:42:23.674719 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97a804d9-65a3-4df8-a009-6289887849fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.674794 kubelet[3217]: E0120 01:42:23.674739 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97a804d9-65a3-4df8-a009-6289887849fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:42:23.679476 containerd[1737]: time="2026-01-20T01:42:23.679351819Z" level=error msg="StopPodSandbox for \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\" failed" error="failed to destroy network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.679680 kubelet[3217]: E0120 01:42:23.679521 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:23.679680 kubelet[3217]: E0120 01:42:23.679554 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca"} Jan 20 01:42:23.679680 kubelet[3217]: E0120 01:42:23.679577 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40f43ea6-fce8-4f79-952b-c7d866e60aed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.679680 kubelet[3217]: E0120 01:42:23.679594 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40f43ea6-fce8-4f79-952b-c7d866e60aed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d78647bdf-6kjdc" podUID="40f43ea6-fce8-4f79-952b-c7d866e60aed" Jan 20 01:42:23.679980 containerd[1737]: time="2026-01-20T01:42:23.679949299Z" level=error msg="StopPodSandbox for \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\" failed" error="failed to destroy network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.680193 kubelet[3217]: E0120 01:42:23.680080 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:23.680193 kubelet[3217]: E0120 01:42:23.680122 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295"} Jan 20 01:42:23.680193 kubelet[3217]: E0120 01:42:23.680144 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.680193 kubelet[3217]: E0120 01:42:23.680166 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:23.683754 containerd[1737]: time="2026-01-20T01:42:23.683471100Z" level=error msg="StopPodSandbox for \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\" failed" error="failed to destroy network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.683824 kubelet[3217]: E0120 01:42:23.683624 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:23.683824 kubelet[3217]: E0120 01:42:23.683656 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be"} Jan 20 01:42:23.683824 kubelet[3217]: E0120 01:42:23.683679 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c28c5ae-f540-4875-a2fd-481f9d148cbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.683824 kubelet[3217]: E0120 01:42:23.683696 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c28c5ae-f540-4875-a2fd-481f9d148cbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:42:23.689319 containerd[1737]: time="2026-01-20T01:42:23.689285942Z" level=error msg="StopPodSandbox for \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\" failed" error="failed to destroy network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.689535 kubelet[3217]: E0120 01:42:23.689496 3217 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:23.689535 kubelet[3217]: E0120 01:42:23.689532 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e"} Jan 20 01:42:23.689635 kubelet[3217]: E0120 01:42:23.689554 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8eac5578-1c74-4107-a02f-d780338d63d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.689635 kubelet[3217]: E0120 01:42:23.689573 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8eac5578-1c74-4107-a02f-d780338d63d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m78tf" podUID="8eac5578-1c74-4107-a02f-d780338d63d7" Jan 20 01:42:23.692284 containerd[1737]: time="2026-01-20T01:42:23.692249623Z" level=error msg="StopPodSandbox for \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\" failed" error="failed to destroy network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:23.692425 kubelet[3217]: E0120 01:42:23.692397 3217 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:23.692471 kubelet[3217]: E0120 01:42:23.692448 3217 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6"} Jan 20 01:42:23.692503 kubelet[3217]: E0120 01:42:23.692470 3217 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a793b124-6073-4604-9c24-ad5326cb3836\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:23.692503 kubelet[3217]: E0120 01:42:23.692487 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a793b124-6073-4604-9c24-ad5326cb3836\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:42:27.724759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834538113.mount: Deactivated successfully. Jan 20 01:42:28.621738 containerd[1737]: time="2026-01-20T01:42:28.621676918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:28.624819 containerd[1737]: time="2026-01-20T01:42:28.624420639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 20 01:42:28.626807 containerd[1737]: time="2026-01-20T01:42:28.626446839Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:28.664153 containerd[1737]: time="2026-01-20T01:42:28.664086050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:28.665333 containerd[1737]: time="2026-01-20T01:42:28.664703210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 5.118417309s" Jan 20 01:42:28.665333 containerd[1737]: time="2026-01-20T01:42:28.664739450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 20 01:42:28.920892 containerd[1737]: time="2026-01-20T01:42:28.920853724Z" level=info msg="CreateContainer within sandbox \"14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 01:42:28.956048 containerd[1737]: time="2026-01-20T01:42:28.956003854Z" level=info msg="CreateContainer within sandbox \"14c3702623315018810abeccc2bd055db47f080dd97e6a38ced0bb030c7ea8c2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"035fc14488e5de1e7d0ab0869d29ae9118a681e464ef2d52aef9fa4d500dc2d2\"" Jan 20 01:42:28.957848 containerd[1737]: time="2026-01-20T01:42:28.956596894Z" level=info msg="StartContainer for \"035fc14488e5de1e7d0ab0869d29ae9118a681e464ef2d52aef9fa4d500dc2d2\"" Jan 20 01:42:28.984998 systemd[1]: Started cri-containerd-035fc14488e5de1e7d0ab0869d29ae9118a681e464ef2d52aef9fa4d500dc2d2.scope - libcontainer container 035fc14488e5de1e7d0ab0869d29ae9118a681e464ef2d52aef9fa4d500dc2d2. 
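Every RunPodSandbox and StopPodSandbox failure above trips over the same check: stat /var/lib/calico/nodename, a file that calico-node writes only once it is running with /var/lib/calico/ mounted. The calico-node container was started in the entry above, which is why the CNI calls at 01:42:29 below begin to succeed. A minimal sketch of that guard, assuming nothing beyond what the errors themselves say (the real plugin is Go; this Python stand-in and its helper name are illustrative only):

```python
# Hypothetical illustration, not Calico source: mimics the guard behind the
# "stat /var/lib/calico/nodename" errors above. calico-node writes this file when it
# starts, so until that container is up every CNI ADD/DELETE fails the same way.
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted verbatim in the errors above

def read_nodename(path: str = NODENAME_FILE) -> str:
    """Return the node name calico-node recorded, or exit with the CNI-style error."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read().strip()
    except FileNotFoundError:
        sys.exit(
            f"stat {path}: no such file or directory: check that the calico/node "
            "container is running and has mounted /var/lib/calico/"
        )

if __name__ == "__main__":
    print(read_nodename())
```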
Jan 20 01:42:29.021064 containerd[1737]: time="2026-01-20T01:42:29.021017832Z" level=info msg="StartContainer for \"035fc14488e5de1e7d0ab0869d29ae9118a681e464ef2d52aef9fa4d500dc2d2\" returns successfully" Jan 20 01:42:29.283698 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 01:42:29.284055 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 20 01:42:29.411322 containerd[1737]: time="2026-01-20T01:42:29.411132824Z" level=info msg="StopPodSandbox for \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\"" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.526 [INFO][4473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.526 [INFO][4473] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" iface="eth0" netns="/var/run/netns/cni-bde8afcd-9b75-902b-d160-7196bb0878a2" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.527 [INFO][4473] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" iface="eth0" netns="/var/run/netns/cni-bde8afcd-9b75-902b-d160-7196bb0878a2" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.529 [INFO][4473] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" iface="eth0" netns="/var/run/netns/cni-bde8afcd-9b75-902b-d160-7196bb0878a2" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.529 [INFO][4473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.529 [INFO][4473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.554 [INFO][4482] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.555 [INFO][4482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.555 [INFO][4482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.564 [WARNING][4482] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.564 [INFO][4482] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.566 [INFO][4482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:29.570815 containerd[1737]: 2026-01-20 01:42:29.569 [INFO][4473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:29.572433 containerd[1737]: time="2026-01-20T01:42:29.572087991Z" level=info msg="TearDown network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\" successfully" Jan 20 01:42:29.572433 containerd[1737]: time="2026-01-20T01:42:29.572122711Z" level=info msg="StopPodSandbox for \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\" returns successfully" Jan 20 01:42:29.574546 systemd[1]: run-netns-cni\x2dbde8afcd\x2d9b75\x2d902b\x2dd160\x2d7196bb0878a2.mount: Deactivated successfully. Jan 20 01:42:29.628757 kubelet[3217]: I0120 01:42:29.627998 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wvjz7" podStartSLOduration=1.429683128 podStartE2EDuration="16.627979807s" podCreationTimestamp="2026-01-20 01:42:13 +0000 UTC" firstStartedPulling="2026-01-20 01:42:13.467095691 +0000 UTC m=+27.164262452" lastFinishedPulling="2026-01-20 01:42:28.66539233 +0000 UTC m=+42.362559131" observedRunningTime="2026-01-20 01:42:29.627746966 +0000 UTC m=+43.324913727" watchObservedRunningTime="2026-01-20 01:42:29.627979807 +0000 UTC m=+43.325146608" Jan 20 01:42:29.665004 kubelet[3217]: I0120 01:42:29.664903 3217 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f43ea6-fce8-4f79-952b-c7d866e60aed-whisker-ca-bundle\") pod \"40f43ea6-fce8-4f79-952b-c7d866e60aed\" (UID: \"40f43ea6-fce8-4f79-952b-c7d866e60aed\") " Jan 20 01:42:29.666214 kubelet[3217]: I0120 01:42:29.665572 3217 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40f43ea6-fce8-4f79-952b-c7d866e60aed-whisker-backend-key-pair\") pod \"40f43ea6-fce8-4f79-952b-c7d866e60aed\" (UID: \"40f43ea6-fce8-4f79-952b-c7d866e60aed\") " Jan 20 01:42:29.666214 kubelet[3217]: I0120 01:42:29.665611 3217 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2sg\" (UniqueName: \"kubernetes.io/projected/40f43ea6-fce8-4f79-952b-c7d866e60aed-kube-api-access-bc2sg\") pod \"40f43ea6-fce8-4f79-952b-c7d866e60aed\" (UID: \"40f43ea6-fce8-4f79-952b-c7d866e60aed\") " Jan 20 01:42:29.666214 kubelet[3217]: I0120 01:42:29.665339 3217 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f43ea6-fce8-4f79-952b-c7d866e60aed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "40f43ea6-fce8-4f79-952b-c7d866e60aed" (UID: 
"40f43ea6-fce8-4f79-952b-c7d866e60aed"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:42:29.669848 kubelet[3217]: I0120 01:42:29.668893 3217 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f43ea6-fce8-4f79-952b-c7d866e60aed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "40f43ea6-fce8-4f79-952b-c7d866e60aed" (UID: "40f43ea6-fce8-4f79-952b-c7d866e60aed"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:42:29.670334 kubelet[3217]: I0120 01:42:29.670296 3217 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f43ea6-fce8-4f79-952b-c7d866e60aed-kube-api-access-bc2sg" (OuterVolumeSpecName: "kube-api-access-bc2sg") pod "40f43ea6-fce8-4f79-952b-c7d866e60aed" (UID: "40f43ea6-fce8-4f79-952b-c7d866e60aed"). InnerVolumeSpecName "kube-api-access-bc2sg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:42:29.766153 kubelet[3217]: I0120 01:42:29.766114 3217 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40f43ea6-fce8-4f79-952b-c7d866e60aed-whisker-ca-bundle\") on node \"ci-4081.3.6-n-e5d82fe73a\" DevicePath \"\"" Jan 20 01:42:29.766153 kubelet[3217]: I0120 01:42:29.766147 3217 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40f43ea6-fce8-4f79-952b-c7d866e60aed-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-e5d82fe73a\" DevicePath \"\"" Jan 20 01:42:29.766153 kubelet[3217]: I0120 01:42:29.766158 3217 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bc2sg\" (UniqueName: \"kubernetes.io/projected/40f43ea6-fce8-4f79-952b-c7d866e60aed-kube-api-access-bc2sg\") on node \"ci-4081.3.6-n-e5d82fe73a\" DevicePath \"\"" Jan 20 01:42:29.905734 systemd[1]: Removed slice kubepods-besteffort-pod40f43ea6_fce8_4f79_952b_c7d866e60aed.slice - libcontainer container kubepods-besteffort-pod40f43ea6_fce8_4f79_952b_c7d866e60aed.slice. Jan 20 01:42:29.946840 systemd[1]: var-lib-kubelet-pods-40f43ea6\x2dfce8\x2d4f79\x2d952b\x2dc7d866e60aed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbc2sg.mount: Deactivated successfully. Jan 20 01:42:29.946946 systemd[1]: var-lib-kubelet-pods-40f43ea6\x2dfce8\x2d4f79\x2d952b\x2dc7d866e60aed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 01:42:29.996397 systemd[1]: Created slice kubepods-besteffort-poda7a9b064_5e91_49bb_b0db_fcf6fce9b0be.slice - libcontainer container kubepods-besteffort-poda7a9b064_5e91_49bb_b0db_fcf6fce9b0be.slice. 
Jan 20 01:42:30.067505 kubelet[3217]: I0120 01:42:30.067465 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7a9b064-5e91-49bb-b0db-fcf6fce9b0be-whisker-ca-bundle\") pod \"whisker-6f58966dbf-54hk5\" (UID: \"a7a9b064-5e91-49bb-b0db-fcf6fce9b0be\") " pod="calico-system/whisker-6f58966dbf-54hk5" Jan 20 01:42:30.067643 kubelet[3217]: I0120 01:42:30.067508 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a7a9b064-5e91-49bb-b0db-fcf6fce9b0be-whisker-backend-key-pair\") pod \"whisker-6f58966dbf-54hk5\" (UID: \"a7a9b064-5e91-49bb-b0db-fcf6fce9b0be\") " pod="calico-system/whisker-6f58966dbf-54hk5" Jan 20 01:42:30.067643 kubelet[3217]: I0120 01:42:30.067561 3217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwn9\" (UniqueName: \"kubernetes.io/projected/a7a9b064-5e91-49bb-b0db-fcf6fce9b0be-kube-api-access-7mwn9\") pod \"whisker-6f58966dbf-54hk5\" (UID: \"a7a9b064-5e91-49bb-b0db-fcf6fce9b0be\") " pod="calico-system/whisker-6f58966dbf-54hk5" Jan 20 01:42:30.302594 containerd[1737]: time="2026-01-20T01:42:30.302415760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f58966dbf-54hk5,Uid:a7a9b064-5e91-49bb-b0db-fcf6fce9b0be,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:30.420811 kubelet[3217]: I0120 01:42:30.419811 3217 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f43ea6-fce8-4f79-952b-c7d866e60aed" path="/var/lib/kubelet/pods/40f43ea6-fce8-4f79-952b-c7d866e60aed/volumes" Jan 20 01:42:30.467483 systemd-networkd[1356]: cali7604046333a: Link UP Jan 20 01:42:30.467678 systemd-networkd[1356]: cali7604046333a: Gained carrier Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.348 [INFO][4504] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.362 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0 whisker-6f58966dbf- calico-system a7a9b064-5e91-49bb-b0db-fcf6fce9b0be 958 0 2026-01-20 01:42:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f58966dbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a whisker-6f58966dbf-54hk5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7604046333a [] [] }} ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Namespace="calico-system" Pod="whisker-6f58966dbf-54hk5" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.362 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Namespace="calico-system" Pod="whisker-6f58966dbf-54hk5" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.385 [INFO][4517] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" 
HandleID="k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.385 [INFO][4517] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" HandleID="k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"whisker-6f58966dbf-54hk5", "timestamp":"2026-01-20 01:42:30.385136264 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.385 [INFO][4517] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.385 [INFO][4517] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.385 [INFO][4517] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.394 [INFO][4517] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.398 [INFO][4517] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.404 [INFO][4517] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.407 [INFO][4517] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.409 [INFO][4517] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.409 [INFO][4517] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.410 [INFO][4517] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74 Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.419 [INFO][4517] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.425 [INFO][4517] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.1/26] block=192.168.69.0/26 handle="k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.425 [INFO][4517] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.1/26] handle="k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.425 [INFO][4517] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:30.485956 containerd[1737]: 2026-01-20 01:42:30.425 [INFO][4517] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.1/26] IPv6=[] ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" HandleID="k8s-pod-network.ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" Jan 20 01:42:30.486554 containerd[1737]: 2026-01-20 01:42:30.427 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Namespace="calico-system" Pod="whisker-6f58966dbf-54hk5" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0", GenerateName:"whisker-6f58966dbf-", Namespace:"calico-system", SelfLink:"", UID:"a7a9b064-5e91-49bb-b0db-fcf6fce9b0be", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f58966dbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"whisker-6f58966dbf-54hk5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7604046333a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:30.486554 containerd[1737]: 2026-01-20 01:42:30.427 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.1/32] ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Namespace="calico-system" Pod="whisker-6f58966dbf-54hk5" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" Jan 20 01:42:30.486554 containerd[1737]: 2026-01-20 01:42:30.427 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7604046333a ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Namespace="calico-system" Pod="whisker-6f58966dbf-54hk5" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" Jan 20 01:42:30.486554 containerd[1737]: 2026-01-20 01:42:30.467 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Namespace="calico-system" Pod="whisker-6f58966dbf-54hk5" 
WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" Jan 20 01:42:30.486554 containerd[1737]: 2026-01-20 01:42:30.468 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Namespace="calico-system" Pod="whisker-6f58966dbf-54hk5" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0", GenerateName:"whisker-6f58966dbf-", Namespace:"calico-system", SelfLink:"", UID:"a7a9b064-5e91-49bb-b0db-fcf6fce9b0be", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f58966dbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74", Pod:"whisker-6f58966dbf-54hk5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7604046333a", MAC:"06:60:bd:fe:1b:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:30.486554 containerd[1737]: 2026-01-20 01:42:30.483 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74" Namespace="calico-system" Pod="whisker-6f58966dbf-54hk5" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--6f58966dbf--54hk5-eth0" Jan 20 01:42:30.501675 containerd[1737]: time="2026-01-20T01:42:30.501445257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:30.501675 containerd[1737]: time="2026-01-20T01:42:30.501511657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:30.501675 containerd[1737]: time="2026-01-20T01:42:30.501525937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:30.501675 containerd[1737]: time="2026-01-20T01:42:30.501598577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:30.526969 systemd[1]: Started cri-containerd-ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74.scope - libcontainer container ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74. 
Jan 20 01:42:30.555202 containerd[1737]: time="2026-01-20T01:42:30.555081073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f58966dbf-54hk5,Uid:a7a9b064-5e91-49bb-b0db-fcf6fce9b0be,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff13e913d78b0547bb5ff4ff5c0855674adb8184142e301214817ed0d8089d74\"" Jan 20 01:42:30.557444 containerd[1737]: time="2026-01-20T01:42:30.557365393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:42:30.827318 containerd[1737]: time="2026-01-20T01:42:30.827195231Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:30.868857 containerd[1737]: time="2026-01-20T01:42:30.868773323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:42:30.869414 containerd[1737]: time="2026-01-20T01:42:30.868878163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:42:30.871552 kubelet[3217]: E0120 01:42:30.871422 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:42:30.872494 kubelet[3217]: E0120 01:42:30.871915 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:42:30.879729 kubelet[3217]: E0120 01:42:30.879641 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8f17a3a521ca43d5a97871bb0e325b25,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f58966dbf-54hk5_calico-system(a7a9b064-5e91-49bb-b0db-fcf6fce9b0be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:30.881920 containerd[1737]: time="2026-01-20T01:42:30.881886207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:42:31.110229 containerd[1737]: time="2026-01-20T01:42:31.110074634Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:31.112698 containerd[1737]: time="2026-01-20T01:42:31.112652675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:42:31.112974 containerd[1737]: time="2026-01-20T01:42:31.112682235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:42:31.113021 kubelet[3217]: E0120 01:42:31.112865 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:42:31.113021 kubelet[3217]: E0120 01:42:31.112912 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:42:31.113461 kubelet[3217]: E0120 01:42:31.113295 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f58966dbf-54hk5_calico-system(a7a9b064-5e91-49bb-b0db-fcf6fce9b0be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:31.115038 kubelet[3217]: E0120 01:42:31.114997 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:42:31.609165 kubelet[3217]: E0120 01:42:31.609109 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:42:32.240972 systemd-networkd[1356]: cali7604046333a: Gained IPv6LL Jan 20 01:42:34.417576 containerd[1737]: time="2026-01-20T01:42:34.416809693Z" level=info msg="StopPodSandbox for \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\"" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.462 [INFO][4742] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.463 [INFO][4742] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" iface="eth0" netns="/var/run/netns/cni-d36d1d3f-1a15-5340-5fb3-401261b0e903" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.463 [INFO][4742] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" iface="eth0" netns="/var/run/netns/cni-d36d1d3f-1a15-5340-5fb3-401261b0e903" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.463 [INFO][4742] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" iface="eth0" netns="/var/run/netns/cni-d36d1d3f-1a15-5340-5fb3-401261b0e903" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.463 [INFO][4742] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.463 [INFO][4742] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.483 [INFO][4750] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.483 [INFO][4750] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.483 [INFO][4750] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.492 [WARNING][4750] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.492 [INFO][4750] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.493 [INFO][4750] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:34.498303 containerd[1737]: 2026-01-20 01:42:34.495 [INFO][4742] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:34.498694 containerd[1737]: time="2026-01-20T01:42:34.498469277Z" level=info msg="TearDown network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\" successfully" Jan 20 01:42:34.498694 containerd[1737]: time="2026-01-20T01:42:34.498503677Z" level=info msg="StopPodSandbox for \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\" returns successfully" Jan 20 01:42:34.500769 systemd[1]: run-netns-cni\x2dd36d1d3f\x2d1a15\x2d5340\x2d5fb3\x2d401261b0e903.mount: Deactivated successfully. Jan 20 01:42:34.502839 containerd[1737]: time="2026-01-20T01:42:34.502112158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qkv75,Uid:68f5545e-7661-40cf-baeb-c5c30a862135,Namespace:calico-system,Attempt:1,}" Jan 20 01:42:34.626097 systemd-networkd[1356]: calice84e2cd3b2: Link UP Jan 20 01:42:34.627028 systemd-networkd[1356]: calice84e2cd3b2: Gained carrier Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.550 [INFO][4757] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.563 [INFO][4757] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0 goldmane-666569f655- calico-system 68f5545e-7661-40cf-baeb-c5c30a862135 983 0 2026-01-20 01:42:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a goldmane-666569f655-qkv75 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calice84e2cd3b2 [] [] }} ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Namespace="calico-system" Pod="goldmane-666569f655-qkv75" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.563 [INFO][4757] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Namespace="calico-system" Pod="goldmane-666569f655-qkv75" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.584 [INFO][4769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" HandleID="k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.584 [INFO][4769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" HandleID="k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"goldmane-666569f655-qkv75", "timestamp":"2026-01-20 01:42:34.584427862 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.584 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.584 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.584 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.594 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.598 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.602 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.604 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.605 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.606 [INFO][4769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.607 [INFO][4769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.614 [INFO][4769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.621 [INFO][4769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.2/26] block=192.168.69.0/26 handle="k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" 
host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.622 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.2/26] handle="k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.622 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:34.652675 containerd[1737]: 2026-01-20 01:42:34.622 [INFO][4769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.2/26] IPv6=[] ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" HandleID="k8s-pod-network.c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.653303 containerd[1737]: 2026-01-20 01:42:34.624 [INFO][4757] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Namespace="calico-system" Pod="goldmane-666569f655-qkv75" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"68f5545e-7661-40cf-baeb-c5c30a862135", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"goldmane-666569f655-qkv75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calice84e2cd3b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:34.653303 containerd[1737]: 2026-01-20 01:42:34.624 [INFO][4757] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.2/32] ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Namespace="calico-system" Pod="goldmane-666569f655-qkv75" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.653303 containerd[1737]: 2026-01-20 01:42:34.624 [INFO][4757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice84e2cd3b2 ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Namespace="calico-system" Pod="goldmane-666569f655-qkv75" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.653303 containerd[1737]: 2026-01-20 01:42:34.626 [INFO][4757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Namespace="calico-system" Pod="goldmane-666569f655-qkv75" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.653303 containerd[1737]: 2026-01-20 01:42:34.627 [INFO][4757] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Namespace="calico-system" Pod="goldmane-666569f655-qkv75" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"68f5545e-7661-40cf-baeb-c5c30a862135", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b", Pod:"goldmane-666569f655-qkv75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calice84e2cd3b2", MAC:"da:54:6a:cf:ac:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:34.653303 containerd[1737]: 2026-01-20 01:42:34.650 [INFO][4757] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b" Namespace="calico-system" Pod="goldmane-666569f655-qkv75" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:34.669788 containerd[1737]: time="2026-01-20T01:42:34.668964207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:34.669788 containerd[1737]: time="2026-01-20T01:42:34.669022127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:34.669788 containerd[1737]: time="2026-01-20T01:42:34.669037247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:34.669788 containerd[1737]: time="2026-01-20T01:42:34.669105607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:34.693961 systemd[1]: Started cri-containerd-c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b.scope - libcontainer container c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b. 
Jan 20 01:42:34.723748 containerd[1737]: time="2026-01-20T01:42:34.723663064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qkv75,Uid:68f5545e-7661-40cf-baeb-c5c30a862135,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b\"" Jan 20 01:42:34.726604 containerd[1737]: time="2026-01-20T01:42:34.725564824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:42:34.977513 containerd[1737]: time="2026-01-20T01:42:34.977403939Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:34.979830 containerd[1737]: time="2026-01-20T01:42:34.979777139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:42:34.979910 containerd[1737]: time="2026-01-20T01:42:34.979891979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:34.980092 kubelet[3217]: E0120 01:42:34.980038 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:42:34.980354 kubelet[3217]: E0120 01:42:34.980103 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:42:34.981727 kubelet[3217]: E0120 01:42:34.980955 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fqhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qkv75_calico-system(68f5545e-7661-40cf-baeb-c5c30a862135): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:34.982141 kubelet[3217]: E0120 01:42:34.982110 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:42:35.417569 containerd[1737]: 
time="2026-01-20T01:42:35.416316029Z" level=info msg="StopPodSandbox for \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\"" Jan 20 01:42:35.417569 containerd[1737]: time="2026-01-20T01:42:35.417350749Z" level=info msg="StopPodSandbox for \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\"" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.477 [INFO][4859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.477 [INFO][4859] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" iface="eth0" netns="/var/run/netns/cni-486afc76-c699-34c6-84da-8ffd48ef4714" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.478 [INFO][4859] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" iface="eth0" netns="/var/run/netns/cni-486afc76-c699-34c6-84da-8ffd48ef4714" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.480 [INFO][4859] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" iface="eth0" netns="/var/run/netns/cni-486afc76-c699-34c6-84da-8ffd48ef4714" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.480 [INFO][4859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.480 [INFO][4859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.505 [INFO][4876] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.505 [INFO][4876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.505 [INFO][4876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.514 [WARNING][4876] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.514 [INFO][4876] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.517 [INFO][4876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:42:35.521700 containerd[1737]: 2026-01-20 01:42:35.520 [INFO][4859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:35.524430 containerd[1737]: time="2026-01-20T01:42:35.523927501Z" level=info msg="TearDown network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\" successfully" Jan 20 01:42:35.524430 containerd[1737]: time="2026-01-20T01:42:35.523968021Z" level=info msg="StopPodSandbox for \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\" returns successfully" Jan 20 01:42:35.524356 systemd[1]: run-netns-cni\x2d486afc76\x2dc699\x2d34c6\x2d84da\x2d8ffd48ef4714.mount: Deactivated successfully. Jan 20 01:42:35.526330 containerd[1737]: time="2026-01-20T01:42:35.526302901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588969c7f9-dsqq7,Uid:d821eeb9-a64e-4dc2-bbef-b0976a3bf49a,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.484 [INFO][4863] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.484 [INFO][4863] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" iface="eth0" netns="/var/run/netns/cni-5ec2ce3c-d7f7-745e-2785-c10796b0593d" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.484 [INFO][4863] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" iface="eth0" netns="/var/run/netns/cni-5ec2ce3c-d7f7-745e-2785-c10796b0593d" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.484 [INFO][4863] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" iface="eth0" netns="/var/run/netns/cni-5ec2ce3c-d7f7-745e-2785-c10796b0593d" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.484 [INFO][4863] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.484 [INFO][4863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.505 [INFO][4879] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.505 [INFO][4879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.517 [INFO][4879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.531 [WARNING][4879] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.531 [INFO][4879] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.533 [INFO][4879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:35.536910 containerd[1737]: 2026-01-20 01:42:35.534 [INFO][4863] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:35.538545 containerd[1737]: time="2026-01-20T01:42:35.536989784Z" level=info msg="TearDown network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\" successfully" Jan 20 01:42:35.538545 containerd[1737]: time="2026-01-20T01:42:35.537011424Z" level=info msg="StopPodSandbox for \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\" returns successfully" Jan 20 01:42:35.539164 containerd[1737]: time="2026-01-20T01:42:35.539133385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m78tf,Uid:8eac5578-1c74-4107-a02f-d780338d63d7,Namespace:kube-system,Attempt:1,}" Jan 20 01:42:35.539669 systemd[1]: run-netns-cni\x2d5ec2ce3c\x2dd7f7\x2d745e\x2d2785\x2dc10796b0593d.mount: Deactivated successfully. Jan 20 01:42:35.615451 kubelet[3217]: E0120 01:42:35.615295 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:42:35.695036 systemd-networkd[1356]: cali13f65b2e1f1: Link UP Jan 20 01:42:35.696750 systemd-networkd[1356]: cali13f65b2e1f1: Gained carrier Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.584 [INFO][4890] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.598 [INFO][4890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0 calico-apiserver-588969c7f9- calico-apiserver d821eeb9-a64e-4dc2-bbef-b0976a3bf49a 996 0 2026-01-20 01:42:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:588969c7f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a calico-apiserver-588969c7f9-dsqq7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali13f65b2e1f1 [] [] }} 
ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-dsqq7" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.598 [INFO][4890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-dsqq7" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.637 [INFO][4912] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" HandleID="k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.637 [INFO][4912] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" HandleID="k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"calico-apiserver-588969c7f9-dsqq7", "timestamp":"2026-01-20 01:42:35.637682334 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.637 [INFO][4912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.638 [INFO][4912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.638 [INFO][4912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.652 [INFO][4912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.660 [INFO][4912] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.665 [INFO][4912] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.667 [INFO][4912] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.672 [INFO][4912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.672 [INFO][4912] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.673 [INFO][4912] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.683 [INFO][4912] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.689 [INFO][4912] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.3/26] block=192.168.69.0/26 handle="k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.689 [INFO][4912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.3/26] handle="k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.689 [INFO][4912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:42:35.720953 containerd[1737]: 2026-01-20 01:42:35.689 [INFO][4912] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.3/26] IPv6=[] ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" HandleID="k8s-pod-network.cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.722188 containerd[1737]: 2026-01-20 01:42:35.692 [INFO][4890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-dsqq7" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0", GenerateName:"calico-apiserver-588969c7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d821eeb9-a64e-4dc2-bbef-b0976a3bf49a", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588969c7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"calico-apiserver-588969c7f9-dsqq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13f65b2e1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:35.722188 containerd[1737]: 2026-01-20 01:42:35.692 [INFO][4890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.3/32] ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-dsqq7" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.722188 containerd[1737]: 2026-01-20 01:42:35.692 [INFO][4890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13f65b2e1f1 ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-dsqq7" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.722188 containerd[1737]: 2026-01-20 01:42:35.697 [INFO][4890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-dsqq7" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.722188 containerd[1737]: 2026-01-20 01:42:35.699 [INFO][4890] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-dsqq7" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0", GenerateName:"calico-apiserver-588969c7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d821eeb9-a64e-4dc2-bbef-b0976a3bf49a", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588969c7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f", Pod:"calico-apiserver-588969c7f9-dsqq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13f65b2e1f1", MAC:"42:8f:c9:73:bd:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:35.722188 containerd[1737]: 2026-01-20 01:42:35.718 [INFO][4890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-dsqq7" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:35.738694 containerd[1737]: time="2026-01-20T01:42:35.738486084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:35.738694 containerd[1737]: time="2026-01-20T01:42:35.738538524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:35.738694 containerd[1737]: time="2026-01-20T01:42:35.738553524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:35.738694 containerd[1737]: time="2026-01-20T01:42:35.738624564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:35.755953 systemd[1]: Started cri-containerd-cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f.scope - libcontainer container cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f. 
Jan 20 01:42:35.761280 systemd-networkd[1356]: calice84e2cd3b2: Gained IPv6LL Jan 20 01:42:35.812064 systemd-networkd[1356]: cali746d596c0e8: Link UP Jan 20 01:42:35.816938 systemd-networkd[1356]: cali746d596c0e8: Gained carrier Jan 20 01:42:35.837779 containerd[1737]: time="2026-01-20T01:42:35.837738953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588969c7f9-dsqq7,Uid:d821eeb9-a64e-4dc2-bbef-b0976a3bf49a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f\"" Jan 20 01:42:35.841007 containerd[1737]: time="2026-01-20T01:42:35.840959354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.611 [INFO][4900] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.641 [INFO][4900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0 coredns-674b8bbfcf- kube-system 8eac5578-1c74-4107-a02f-d780338d63d7 997 0 2026-01-20 01:41:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a coredns-674b8bbfcf-m78tf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali746d596c0e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m78tf" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.642 [INFO][4900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m78tf" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.680 [INFO][4922] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" HandleID="k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.680 [INFO][4922] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" HandleID="k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"coredns-674b8bbfcf-m78tf", "timestamp":"2026-01-20 01:42:35.680877387 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.681 [INFO][4922] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.689 [INFO][4922] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.689 [INFO][4922] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.755 [INFO][4922] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.760 [INFO][4922] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.770 [INFO][4922] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.774 [INFO][4922] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.777 [INFO][4922] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.777 [INFO][4922] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.779 [INFO][4922] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.785 [INFO][4922] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.796 [INFO][4922] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.4/26] block=192.168.69.0/26 handle="k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.796 [INFO][4922] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.4/26] handle="k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.796 [INFO][4922] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:42:35.847076 containerd[1737]: 2026-01-20 01:42:35.796 [INFO][4922] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.4/26] IPv6=[] ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" HandleID="k8s-pod-network.340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.848024 containerd[1737]: 2026-01-20 01:42:35.801 [INFO][4900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m78tf" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8eac5578-1c74-4107-a02f-d780338d63d7", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"coredns-674b8bbfcf-m78tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali746d596c0e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:35.848024 containerd[1737]: 2026-01-20 01:42:35.801 [INFO][4900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.4/32] ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m78tf" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.848024 containerd[1737]: 2026-01-20 01:42:35.801 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali746d596c0e8 ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m78tf" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.848024 containerd[1737]: 2026-01-20 01:42:35.816 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-m78tf" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.848024 containerd[1737]: 2026-01-20 01:42:35.818 [INFO][4900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m78tf" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8eac5578-1c74-4107-a02f-d780338d63d7", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b", Pod:"coredns-674b8bbfcf-m78tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali746d596c0e8", MAC:"4a:8e:cb:2b:7c:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:35.848024 containerd[1737]: 2026-01-20 01:42:35.843 [INFO][4900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m78tf" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:35.866824 kubelet[3217]: I0120 01:42:35.866484 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:42:36.139267 containerd[1737]: time="2026-01-20T01:42:36.113612395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:36.139267 containerd[1737]: time="2026-01-20T01:42:36.113670995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:36.139267 containerd[1737]: time="2026-01-20T01:42:36.113687795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:36.139267 containerd[1737]: time="2026-01-20T01:42:36.113777795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:36.179018 systemd[1]: Started cri-containerd-340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b.scope - libcontainer container 340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b. Jan 20 01:42:36.218589 containerd[1737]: time="2026-01-20T01:42:36.218465986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m78tf,Uid:8eac5578-1c74-4107-a02f-d780338d63d7,Namespace:kube-system,Attempt:1,} returns sandbox id \"340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b\"" Jan 20 01:42:36.238520 containerd[1737]: time="2026-01-20T01:42:36.238393512Z" level=info msg="CreateContainer within sandbox \"340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:42:36.271835 containerd[1737]: time="2026-01-20T01:42:36.271740362Z" level=info msg="CreateContainer within sandbox \"340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08c7ac75b70ca015bc09776900cb8a8cad24edaedfb22ea8dc8d77bad4889973\"" Jan 20 01:42:36.276014 containerd[1737]: time="2026-01-20T01:42:36.275979043Z" level=info msg="StartContainer for \"08c7ac75b70ca015bc09776900cb8a8cad24edaedfb22ea8dc8d77bad4889973\"" Jan 20 01:42:36.320950 systemd[1]: Started cri-containerd-08c7ac75b70ca015bc09776900cb8a8cad24edaedfb22ea8dc8d77bad4889973.scope - libcontainer container 08c7ac75b70ca015bc09776900cb8a8cad24edaedfb22ea8dc8d77bad4889973. 
Jan 20 01:42:36.356522 containerd[1737]: time="2026-01-20T01:42:36.356398627Z" level=info msg="StartContainer for \"08c7ac75b70ca015bc09776900cb8a8cad24edaedfb22ea8dc8d77bad4889973\" returns successfully" Jan 20 01:42:36.417159 containerd[1737]: time="2026-01-20T01:42:36.416640285Z" level=info msg="StopPodSandbox for \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\"" Jan 20 01:42:36.476968 containerd[1737]: time="2026-01-20T01:42:36.476927863Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:36.479755 containerd[1737]: time="2026-01-20T01:42:36.479593983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:36.479755 containerd[1737]: time="2026-01-20T01:42:36.479730703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:36.480289 kubelet[3217]: E0120 01:42:36.479820 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:36.480289 kubelet[3217]: E0120 01:42:36.479859 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:36.490678 kubelet[3217]: E0120 01:42:36.490620 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgvkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-588969c7f9-dsqq7_calico-apiserver(d821eeb9-a64e-4dc2-bbef-b0976a3bf49a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:36.492495 kubelet[3217]: E0120 01:42:36.492470 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.461 [INFO][5133] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.462 [INFO][5133] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" iface="eth0" netns="/var/run/netns/cni-0c7f5cab-f2b4-8636-cca5-e35d6d67cd79" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.462 [INFO][5133] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" iface="eth0" netns="/var/run/netns/cni-0c7f5cab-f2b4-8636-cca5-e35d6d67cd79" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.462 [INFO][5133] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" iface="eth0" netns="/var/run/netns/cni-0c7f5cab-f2b4-8636-cca5-e35d6d67cd79" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.463 [INFO][5133] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.463 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.486 [INFO][5140] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.487 [INFO][5140] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.487 [INFO][5140] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.499 [WARNING][5140] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.499 [INFO][5140] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.501 [INFO][5140] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:36.504772 containerd[1737]: 2026-01-20 01:42:36.503 [INFO][5133] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:36.505594 containerd[1737]: time="2026-01-20T01:42:36.505186871Z" level=info msg="TearDown network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\" successfully" Jan 20 01:42:36.505594 containerd[1737]: time="2026-01-20T01:42:36.505213711Z" level=info msg="StopPodSandbox for \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\" returns successfully" Jan 20 01:42:36.506209 containerd[1737]: time="2026-01-20T01:42:36.505866951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dd6fbd444-88ccv,Uid:97a804d9-65a3-4df8-a009-6289887849fb,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:42:36.526701 systemd[1]: run-netns-cni\x2d0c7f5cab\x2df2b4\x2d8636\x2dcca5\x2de35d6d67cd79.mount: Deactivated successfully. 
Jan 20 01:42:36.639442 kubelet[3217]: E0120 01:42:36.639101 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:42:36.646887 kubelet[3217]: E0120 01:42:36.639120 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:42:36.675970 systemd-networkd[1356]: calif1c67e0426d: Link UP Jan 20 01:42:36.676553 systemd-networkd[1356]: calif1c67e0426d: Gained carrier Jan 20 01:42:36.692407 kubelet[3217]: I0120 01:42:36.692354 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m78tf" podStartSLOduration=43.692336566 podStartE2EDuration="43.692336566s" podCreationTimestamp="2026-01-20 01:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:36.661357637 +0000 UTC m=+50.358524438" watchObservedRunningTime="2026-01-20 01:42:36.692336566 +0000 UTC m=+50.389503367" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.557 [INFO][5146] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.570 [INFO][5146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0 calico-apiserver-7dd6fbd444- calico-apiserver 97a804d9-65a3-4df8-a009-6289887849fb 1020 0 2026-01-20 01:42:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dd6fbd444 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a calico-apiserver-7dd6fbd444-88ccv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1c67e0426d [] [] }} ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Namespace="calico-apiserver" Pod="calico-apiserver-7dd6fbd444-88ccv" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.570 [INFO][5146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Namespace="calico-apiserver" Pod="calico-apiserver-7dd6fbd444-88ccv" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 
01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.593 [INFO][5159] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" HandleID="k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.593 [INFO][5159] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" HandleID="k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"calico-apiserver-7dd6fbd444-88ccv", "timestamp":"2026-01-20 01:42:36.593246457 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.593 [INFO][5159] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.593 [INFO][5159] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.593 [INFO][5159] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.605 [INFO][5159] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.609 [INFO][5159] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.617 [INFO][5159] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.620 [INFO][5159] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.628 [INFO][5159] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.629 [INFO][5159] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.632 [INFO][5159] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.646 [INFO][5159] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.664 [INFO][5159] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.69.5/26] block=192.168.69.0/26 handle="k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.664 [INFO][5159] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.5/26] handle="k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.664 [INFO][5159] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:36.706533 containerd[1737]: 2026-01-20 01:42:36.664 [INFO][5159] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.5/26] IPv6=[] ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" HandleID="k8s-pod-network.821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.707202 containerd[1737]: 2026-01-20 01:42:36.671 [INFO][5146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Namespace="calico-apiserver" Pod="calico-apiserver-7dd6fbd444-88ccv" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0", GenerateName:"calico-apiserver-7dd6fbd444-", Namespace:"calico-apiserver", SelfLink:"", UID:"97a804d9-65a3-4df8-a009-6289887849fb", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dd6fbd444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"calico-apiserver-7dd6fbd444-88ccv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1c67e0426d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:36.707202 containerd[1737]: 2026-01-20 01:42:36.672 [INFO][5146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.5/32] ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Namespace="calico-apiserver" Pod="calico-apiserver-7dd6fbd444-88ccv" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.707202 containerd[1737]: 2026-01-20 01:42:36.672 [INFO][5146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1c67e0426d ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Namespace="calico-apiserver" 
Pod="calico-apiserver-7dd6fbd444-88ccv" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.707202 containerd[1737]: 2026-01-20 01:42:36.676 [INFO][5146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Namespace="calico-apiserver" Pod="calico-apiserver-7dd6fbd444-88ccv" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.707202 containerd[1737]: 2026-01-20 01:42:36.677 [INFO][5146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Namespace="calico-apiserver" Pod="calico-apiserver-7dd6fbd444-88ccv" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0", GenerateName:"calico-apiserver-7dd6fbd444-", Namespace:"calico-apiserver", SelfLink:"", UID:"97a804d9-65a3-4df8-a009-6289887849fb", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dd6fbd444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd", Pod:"calico-apiserver-7dd6fbd444-88ccv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1c67e0426d", MAC:"c2:2b:4b:9a:c9:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:36.707202 containerd[1737]: 2026-01-20 01:42:36.703 [INFO][5146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd" Namespace="calico-apiserver" Pod="calico-apiserver-7dd6fbd444-88ccv" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:36.741802 containerd[1737]: time="2026-01-20T01:42:36.740992781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:36.741802 containerd[1737]: time="2026-01-20T01:42:36.741052701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:36.741802 containerd[1737]: time="2026-01-20T01:42:36.741073621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:36.741802 containerd[1737]: time="2026-01-20T01:42:36.741262821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:36.771436 kubelet[3217]: I0120 01:42:36.771216 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:42:36.778284 systemd[1]: Started cri-containerd-821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd.scope - libcontainer container 821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd. Jan 20 01:42:36.829375 containerd[1737]: time="2026-01-20T01:42:36.829274127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dd6fbd444-88ccv,Uid:97a804d9-65a3-4df8-a009-6289887849fb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd\"" Jan 20 01:42:36.840078 containerd[1737]: time="2026-01-20T01:42:36.840035250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:36.848909 systemd-networkd[1356]: cali13f65b2e1f1: Gained IPv6LL Jan 20 01:42:37.062188 containerd[1737]: time="2026-01-20T01:42:37.061969476Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:37.065883 containerd[1737]: time="2026-01-20T01:42:37.065763997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:37.065883 containerd[1737]: time="2026-01-20T01:42:37.065838157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:37.066175 kubelet[3217]: E0120 01:42:37.066134 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:37.066244 kubelet[3217]: E0120 01:42:37.066184 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:37.066363 kubelet[3217]: E0120 01:42:37.066310 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xth98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dd6fbd444-88ccv_calico-apiserver(97a804d9-65a3-4df8-a009-6289887849fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:37.067704 kubelet[3217]: E0120 01:42:37.067674 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:42:37.168934 systemd-networkd[1356]: cali746d596c0e8: Gained IPv6LL Jan 20 01:42:37.203867 kernel: bpftool[5237]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 01:42:37.417283 containerd[1737]: time="2026-01-20T01:42:37.416872781Z" level=info msg="StopPodSandbox for \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\"" Jan 20 01:42:37.525926 systemd[1]: run-containerd-runc-k8s.io-821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd-runc.E0S8yt.mount: Deactivated successfully. 
Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.479 [INFO][5280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.480 [INFO][5280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" iface="eth0" netns="/var/run/netns/cni-711360d1-92ad-2bf6-981a-f3ea940188dd" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.480 [INFO][5280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" iface="eth0" netns="/var/run/netns/cni-711360d1-92ad-2bf6-981a-f3ea940188dd" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.481 [INFO][5280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" iface="eth0" netns="/var/run/netns/cni-711360d1-92ad-2bf6-981a-f3ea940188dd" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.481 [INFO][5280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.481 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.511 [INFO][5288] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.511 [INFO][5288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.511 [INFO][5288] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.525 [WARNING][5288] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.525 [INFO][5288] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.527 [INFO][5288] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:37.533939 containerd[1737]: 2026-01-20 01:42:37.532 [INFO][5280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:37.536987 systemd[1]: run-netns-cni\x2d711360d1\x2d92ad\x2d2bf6\x2d981a\x2df3ea940188dd.mount: Deactivated successfully. 
Jan 20 01:42:37.537680 containerd[1737]: time="2026-01-20T01:42:37.537043656Z" level=info msg="TearDown network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\" successfully" Jan 20 01:42:37.537680 containerd[1737]: time="2026-01-20T01:42:37.537074256Z" level=info msg="StopPodSandbox for \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\" returns successfully" Jan 20 01:42:37.540388 containerd[1737]: time="2026-01-20T01:42:37.538216777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588969c7f9-g5sn6,Uid:8c28c5ae-f540-4875-a2fd-481f9d148cbd,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:42:37.592366 systemd-networkd[1356]: vxlan.calico: Link UP Jan 20 01:42:37.592371 systemd-networkd[1356]: vxlan.calico: Gained carrier Jan 20 01:42:37.658443 kubelet[3217]: E0120 01:42:37.657477 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:42:37.661232 kubelet[3217]: E0120 01:42:37.661189 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:42:37.783286 systemd-networkd[1356]: cali9baf8d6c635: Link UP Jan 20 01:42:37.784467 systemd-networkd[1356]: cali9baf8d6c635: Gained carrier Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.629 [INFO][5306] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0 calico-apiserver-588969c7f9- calico-apiserver 8c28c5ae-f540-4875-a2fd-481f9d148cbd 1055 0 2026-01-20 01:42:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:588969c7f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a calico-apiserver-588969c7f9-g5sn6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9baf8d6c635 [] [] }} ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-g5sn6" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.629 [INFO][5306] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Namespace="calico-apiserver" 
Pod="calico-apiserver-588969c7f9-g5sn6" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.697 [INFO][5341] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" HandleID="k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.698 [INFO][5341] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" HandleID="k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"calico-apiserver-588969c7f9-g5sn6", "timestamp":"2026-01-20 01:42:37.697653384 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.698 [INFO][5341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.698 [INFO][5341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.698 [INFO][5341] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.712 [INFO][5341] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.720 [INFO][5341] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.729 [INFO][5341] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.732 [INFO][5341] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.734 [INFO][5341] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.735 [INFO][5341] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.749 [INFO][5341] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.756 [INFO][5341] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" 
host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.776 [INFO][5341] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.6/26] block=192.168.69.0/26 handle="k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.776 [INFO][5341] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.6/26] handle="k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.776 [INFO][5341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:37.811333 containerd[1737]: 2026-01-20 01:42:37.776 [INFO][5341] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.6/26] IPv6=[] ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" HandleID="k8s-pod-network.5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.812673 containerd[1737]: 2026-01-20 01:42:37.779 [INFO][5306] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-g5sn6" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0", GenerateName:"calico-apiserver-588969c7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c28c5ae-f540-4875-a2fd-481f9d148cbd", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588969c7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"calico-apiserver-588969c7f9-g5sn6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9baf8d6c635", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:37.812673 containerd[1737]: 2026-01-20 01:42:37.780 [INFO][5306] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.6/32] ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-g5sn6" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.812673 containerd[1737]: 2026-01-20 01:42:37.780 [INFO][5306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name 
to cali9baf8d6c635 ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-g5sn6" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.812673 containerd[1737]: 2026-01-20 01:42:37.785 [INFO][5306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-g5sn6" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.812673 containerd[1737]: 2026-01-20 01:42:37.786 [INFO][5306] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-g5sn6" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0", GenerateName:"calico-apiserver-588969c7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c28c5ae-f540-4875-a2fd-481f9d148cbd", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588969c7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf", Pod:"calico-apiserver-588969c7f9-g5sn6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9baf8d6c635", MAC:"6a:ae:de:9a:46:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:37.812673 containerd[1737]: 2026-01-20 01:42:37.808 [INFO][5306] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf" Namespace="calico-apiserver" Pod="calico-apiserver-588969c7f9-g5sn6" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:37.831838 containerd[1737]: time="2026-01-20T01:42:37.831682744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:37.831838 containerd[1737]: time="2026-01-20T01:42:37.831737744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:37.831838 containerd[1737]: time="2026-01-20T01:42:37.831752784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:37.832166 containerd[1737]: time="2026-01-20T01:42:37.832127744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:37.856949 systemd[1]: Started cri-containerd-5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf.scope - libcontainer container 5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf. Jan 20 01:42:37.892920 containerd[1737]: time="2026-01-20T01:42:37.892876722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-588969c7f9-g5sn6,Uid:8c28c5ae-f540-4875-a2fd-481f9d148cbd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf\"" Jan 20 01:42:37.898854 containerd[1737]: time="2026-01-20T01:42:37.898648563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:37.937000 systemd-networkd[1356]: calif1c67e0426d: Gained IPv6LL Jan 20 01:42:38.145727 containerd[1737]: time="2026-01-20T01:42:38.145549716Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:38.149024 containerd[1737]: time="2026-01-20T01:42:38.148881957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:38.149024 containerd[1737]: time="2026-01-20T01:42:38.148999837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:38.149264 kubelet[3217]: E0120 01:42:38.149217 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:38.149335 kubelet[3217]: E0120 01:42:38.149274 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:38.152526 kubelet[3217]: E0120 01:42:38.152475 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddcnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-588969c7f9-g5sn6_calico-apiserver(8c28c5ae-f540-4875-a2fd-481f9d148cbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:38.154599 kubelet[3217]: E0120 01:42:38.153619 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:42:38.417868 containerd[1737]: time="2026-01-20T01:42:38.416735797Z" level=info msg="StopPodSandbox for \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\"" Jan 20 01:42:38.417868 containerd[1737]: time="2026-01-20T01:42:38.417583077Z" level=info msg="StopPodSandbox for \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\"" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.478 [INFO][5467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.478 [INFO][5467] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" iface="eth0" netns="/var/run/netns/cni-8d793d1b-d565-4d0b-6cbd-baea7a4ebe8c" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.479 [INFO][5467] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" iface="eth0" netns="/var/run/netns/cni-8d793d1b-d565-4d0b-6cbd-baea7a4ebe8c" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.479 [INFO][5467] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" iface="eth0" netns="/var/run/netns/cni-8d793d1b-d565-4d0b-6cbd-baea7a4ebe8c" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.480 [INFO][5467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.480 [INFO][5467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.513 [INFO][5480] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.514 [INFO][5480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.514 [INFO][5480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.528 [WARNING][5480] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.528 [INFO][5480] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.532 [INFO][5480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:38.535859 containerd[1737]: 2026-01-20 01:42:38.534 [INFO][5467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:38.538912 containerd[1737]: time="2026-01-20T01:42:38.538245353Z" level=info msg="TearDown network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\" successfully" Jan 20 01:42:38.538912 containerd[1737]: time="2026-01-20T01:42:38.538285553Z" level=info msg="StopPodSandbox for \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\" returns successfully" Jan 20 01:42:38.538950 systemd[1]: run-netns-cni\x2d8d793d1b\x2dd565\x2d4d0b\x2d6cbd\x2dbaea7a4ebe8c.mount: Deactivated successfully. 
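The repeated ErrImagePull/ImagePullBackOff entries above come from containerd receiving http.StatusNotFound for the ghcr.io/flatcar/calico/apiserver:v3.30.4 manifest. One hedged way to confirm outside the kubelet back-off loop that the tag really is absent is to query the registry directly; the sketch below assumes ghcr.io's standard anonymous-token flow for the OCI distribution API and is illustrative only:

#!/usr/bin/env python3
"""Check whether an image tag exists on ghcr.io, mirroring the NotFound
result containerd logged above. The token endpoint and header choices are
assumptions about ghcr.io's standard OCI distribution behaviour."""
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/apiserver"   # repository from the failing pull above
TAG = "v3.30.4"

def manifest_status(repo: str, tag: str) -> int:
    # Anonymous bearer token for pull scope (assumed ghcr.io token endpoint).
    tok_url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repo}:pull"
    with urllib.request.urlopen(tok_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code   # 404 corresponds to the "not found" in the log

if __name__ == "__main__":
    print(REPO, TAG, "->", manifest_status(REPO, TAG))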
Jan 20 01:42:38.540696 containerd[1737]: time="2026-01-20T01:42:38.540638113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddgxn,Uid:138649de-b257-4de2-b470-3f54b1f24475,Namespace:kube-system,Attempt:1,}" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.496 [INFO][5466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.496 [INFO][5466] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" iface="eth0" netns="/var/run/netns/cni-f74761eb-d8d5-7092-65d2-b9cf5f315dbd" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.497 [INFO][5466] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" iface="eth0" netns="/var/run/netns/cni-f74761eb-d8d5-7092-65d2-b9cf5f315dbd" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.498 [INFO][5466] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" iface="eth0" netns="/var/run/netns/cni-f74761eb-d8d5-7092-65d2-b9cf5f315dbd" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.499 [INFO][5466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.499 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.544 [INFO][5485] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.544 [INFO][5485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.544 [INFO][5485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.554 [WARNING][5485] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.556 [INFO][5485] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.557 [INFO][5485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:38.564851 containerd[1737]: 2026-01-20 01:42:38.561 [INFO][5466] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:38.567327 containerd[1737]: time="2026-01-20T01:42:38.565849761Z" level=info msg="TearDown network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\" successfully" Jan 20 01:42:38.567327 containerd[1737]: time="2026-01-20T01:42:38.565904761Z" level=info msg="StopPodSandbox for \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\" returns successfully" Jan 20 01:42:38.567557 systemd[1]: run-netns-cni\x2df74761eb\x2dd8d5\x2d7092\x2d65d2\x2db9cf5f315dbd.mount: Deactivated successfully. Jan 20 01:42:38.571981 containerd[1737]: time="2026-01-20T01:42:38.571946883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59886bc69c-2p6tc,Uid:a793b124-6073-4604-9c24-ad5326cb3836,Namespace:calico-system,Attempt:1,}" Jan 20 01:42:38.664501 kubelet[3217]: E0120 01:42:38.664460 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:42:38.664878 kubelet[3217]: E0120 01:42:38.664264 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:42:38.746114 systemd-networkd[1356]: cali58367ec4bb5: Link UP Jan 20 01:42:38.747105 systemd-networkd[1356]: cali58367ec4bb5: Gained carrier Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.619 [INFO][5493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0 coredns-674b8bbfcf- kube-system 138649de-b257-4de2-b470-3f54b1f24475 1073 0 2026-01-20 01:41:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a coredns-674b8bbfcf-ddgxn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali58367ec4bb5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddgxn" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.619 [INFO][5493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddgxn" 
WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.657 [INFO][5515] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" HandleID="k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.657 [INFO][5515] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" HandleID="k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"coredns-674b8bbfcf-ddgxn", "timestamp":"2026-01-20 01:42:38.657152908 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.657 [INFO][5515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.657 [INFO][5515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.657 [INFO][5515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.670 [INFO][5515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.682 [INFO][5515] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.704 [INFO][5515] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.708 [INFO][5515] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.714 [INFO][5515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.714 [INFO][5515] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.717 [INFO][5515] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25 Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.726 [INFO][5515] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 
01:42:38.736 [INFO][5515] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.7/26] block=192.168.69.0/26 handle="k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.737 [INFO][5515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.7/26] handle="k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.737 [INFO][5515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:38.770678 containerd[1737]: 2026-01-20 01:42:38.737 [INFO][5515] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.7/26] IPv6=[] ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" HandleID="k8s-pod-network.a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.772962 containerd[1737]: 2026-01-20 01:42:38.740 [INFO][5493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddgxn" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"138649de-b257-4de2-b470-3f54b1f24475", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"coredns-674b8bbfcf-ddgxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58367ec4bb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:38.772962 containerd[1737]: 2026-01-20 01:42:38.740 [INFO][5493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.7/32] ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddgxn" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 
01:42:38.772962 containerd[1737]: 2026-01-20 01:42:38.740 [INFO][5493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58367ec4bb5 ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddgxn" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.772962 containerd[1737]: 2026-01-20 01:42:38.747 [INFO][5493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddgxn" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.772962 containerd[1737]: 2026-01-20 01:42:38.750 [INFO][5493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddgxn" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"138649de-b257-4de2-b470-3f54b1f24475", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25", Pod:"coredns-674b8bbfcf-ddgxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58367ec4bb5", MAC:"ba:d7:7e:ea:b5:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:38.772962 containerd[1737]: 2026-01-20 01:42:38.765 [INFO][5493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddgxn" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:38.797417 containerd[1737]: time="2026-01-20T01:42:38.797152909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:38.797417 containerd[1737]: time="2026-01-20T01:42:38.797204109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:38.797417 containerd[1737]: time="2026-01-20T01:42:38.797220109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:38.798624 containerd[1737]: time="2026-01-20T01:42:38.798519630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:38.818943 systemd[1]: Started cri-containerd-a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25.scope - libcontainer container a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25. Jan 20 01:42:38.847207 systemd-networkd[1356]: califdfd89bb446: Link UP Jan 20 01:42:38.847421 systemd-networkd[1356]: califdfd89bb446: Gained carrier Jan 20 01:42:38.874667 containerd[1737]: time="2026-01-20T01:42:38.874466212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddgxn,Uid:138649de-b257-4de2-b470-3f54b1f24475,Namespace:kube-system,Attempt:1,} returns sandbox id \"a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25\"" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.659 [INFO][5505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0 calico-kube-controllers-59886bc69c- calico-system a793b124-6073-4604-9c24-ad5326cb3836 1074 0 2026-01-20 01:42:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59886bc69c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a calico-kube-controllers-59886bc69c-2p6tc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califdfd89bb446 [] [] }} ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Namespace="calico-system" Pod="calico-kube-controllers-59886bc69c-2p6tc" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.660 [INFO][5505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Namespace="calico-system" Pod="calico-kube-controllers-59886bc69c-2p6tc" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.710 [INFO][5524] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" HandleID="k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.710 [INFO][5524] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" 
HandleID="k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"calico-kube-controllers-59886bc69c-2p6tc", "timestamp":"2026-01-20 01:42:38.710141524 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.710 [INFO][5524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.737 [INFO][5524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.737 [INFO][5524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.771 [INFO][5524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.779 [INFO][5524] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.802 [INFO][5524] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.806 [INFO][5524] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.809 [INFO][5524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.809 [INFO][5524] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.810 [INFO][5524] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2 Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.821 [INFO][5524] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.836 [INFO][5524] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.8/26] block=192.168.69.0/26 handle="k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.836 [INFO][5524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.8/26] handle="k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.836 [INFO][5524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
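Each workload above is carved out of the same host-affine block, 192.168.69.0/26: .6 went to calico-apiserver-588969c7f9-g5sn6, .7 to coredns-674b8bbfcf-ddgxn, and .8 is being claimed here for calico-kube-controllers-59886bc69c-2p6tc. A small stdlib sketch of the block arithmetic (an approximation for illustration, not Calico's actual IPAM allocator):

#!/usr/bin/env python3
"""Arithmetic behind the 192.168.69.0/26 IPAM block loaded repeatedly above.
The assigned addresses are the ones visible in this journal; the 'next
candidate' logic is a simplification of what Calico IPAM really does."""
import ipaddress

block = ipaddress.ip_network("192.168.69.0/26")   # affine block for ci-4081.3.6-n-e5d82fe73a
assigned = [ipaddress.ip_address(a) for a in
            ("192.168.69.6", "192.168.69.7", "192.168.69.8")]

print("block size:", block.num_addresses)                       # 64 addresses in a /26
print("all assigned inside block:", all(a in block for a in assigned))

# Next free host address a purely sequential allocator could hand out;
# real Calico IPAM also consults handles, reservations and affinities.
next_candidate = next(a for a in block.hosts()
                      if a not in assigned and a > max(assigned))
print("next candidate:", next_candidate)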
Jan 20 01:42:38.875924 containerd[1737]: 2026-01-20 01:42:38.836 [INFO][5524] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.8/26] IPv6=[] ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" HandleID="k8s-pod-network.d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.877482 containerd[1737]: 2026-01-20 01:42:38.840 [INFO][5505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Namespace="calico-system" Pod="calico-kube-controllers-59886bc69c-2p6tc" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0", GenerateName:"calico-kube-controllers-59886bc69c-", Namespace:"calico-system", SelfLink:"", UID:"a793b124-6073-4604-9c24-ad5326cb3836", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59886bc69c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"calico-kube-controllers-59886bc69c-2p6tc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califdfd89bb446", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:38.877482 containerd[1737]: 2026-01-20 01:42:38.840 [INFO][5505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.8/32] ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Namespace="calico-system" Pod="calico-kube-controllers-59886bc69c-2p6tc" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.877482 containerd[1737]: 2026-01-20 01:42:38.840 [INFO][5505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdfd89bb446 ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Namespace="calico-system" Pod="calico-kube-controllers-59886bc69c-2p6tc" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.877482 containerd[1737]: 2026-01-20 01:42:38.846 [INFO][5505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Namespace="calico-system" Pod="calico-kube-controllers-59886bc69c-2p6tc" 
WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.877482 containerd[1737]: 2026-01-20 01:42:38.846 [INFO][5505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Namespace="calico-system" Pod="calico-kube-controllers-59886bc69c-2p6tc" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0", GenerateName:"calico-kube-controllers-59886bc69c-", Namespace:"calico-system", SelfLink:"", UID:"a793b124-6073-4604-9c24-ad5326cb3836", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59886bc69c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2", Pod:"calico-kube-controllers-59886bc69c-2p6tc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califdfd89bb446", MAC:"66:ce:6c:67:df:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:38.877482 containerd[1737]: 2026-01-20 01:42:38.864 [INFO][5505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2" Namespace="calico-system" Pod="calico-kube-controllers-59886bc69c-2p6tc" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:38.899222 containerd[1737]: time="2026-01-20T01:42:38.899027297Z" level=info msg="CreateContainer within sandbox \"a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:42:38.906815 containerd[1737]: time="2026-01-20T01:42:38.906612977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:38.906815 containerd[1737]: time="2026-01-20T01:42:38.906691057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:38.906815 containerd[1737]: time="2026-01-20T01:42:38.906711897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:38.907327 containerd[1737]: time="2026-01-20T01:42:38.907277378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:38.922933 systemd[1]: Started cri-containerd-d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2.scope - libcontainer container d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2. Jan 20 01:42:38.926426 containerd[1737]: time="2026-01-20T01:42:38.925651379Z" level=info msg="CreateContainer within sandbox \"a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f189a2b8f548e56d8908dc5117288921a8210d77b98214672d068c62896247cc\"" Jan 20 01:42:38.926909 containerd[1737]: time="2026-01-20T01:42:38.926736779Z" level=info msg="StartContainer for \"f189a2b8f548e56d8908dc5117288921a8210d77b98214672d068c62896247cc\"" Jan 20 01:42:38.961039 systemd[1]: Started cri-containerd-f189a2b8f548e56d8908dc5117288921a8210d77b98214672d068c62896247cc.scope - libcontainer container f189a2b8f548e56d8908dc5117288921a8210d77b98214672d068c62896247cc. Jan 20 01:42:38.991366 containerd[1737]: time="2026-01-20T01:42:38.991328626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59886bc69c-2p6tc,Uid:a793b124-6073-4604-9c24-ad5326cb3836,Namespace:calico-system,Attempt:1,} returns sandbox id \"d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2\"" Jan 20 01:42:38.993742 containerd[1737]: time="2026-01-20T01:42:38.993599506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:42:39.004670 containerd[1737]: time="2026-01-20T01:42:39.004397267Z" level=info msg="StartContainer for \"f189a2b8f548e56d8908dc5117288921a8210d77b98214672d068c62896247cc\" returns successfully" Jan 20 01:42:39.261910 containerd[1737]: time="2026-01-20T01:42:39.261752492Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:39.263955 containerd[1737]: time="2026-01-20T01:42:39.263916372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:42:39.264207 kubelet[3217]: E0120 01:42:39.264170 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:42:39.264265 kubelet[3217]: E0120 01:42:39.264219 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:42:39.264380 kubelet[3217]: E0120 01:42:39.264336 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhlh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59886bc69c-2p6tc_calico-system(a793b124-6073-4604-9c24-ad5326cb3836): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:39.264762 containerd[1737]: time="2026-01-20T01:42:39.263977292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:42:39.265760 kubelet[3217]: E0120 01:42:39.265680 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:42:39.417458 containerd[1737]: time="2026-01-20T01:42:39.417317107Z" level=info msg="StopPodSandbox for \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\"" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.460 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.460 [INFO][5678] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" iface="eth0" netns="/var/run/netns/cni-23cf423d-e72c-9e3e-f92d-357da125697d" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.462 [INFO][5678] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" iface="eth0" netns="/var/run/netns/cni-23cf423d-e72c-9e3e-f92d-357da125697d" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.463 [INFO][5678] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" iface="eth0" netns="/var/run/netns/cni-23cf423d-e72c-9e3e-f92d-357da125697d" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.463 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.463 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.485 [INFO][5685] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.485 [INFO][5685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.485 [INFO][5685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.496 [WARNING][5685] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.496 [INFO][5685] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.497 [INFO][5685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:39.501514 containerd[1737]: 2026-01-20 01:42:39.499 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:39.501965 containerd[1737]: time="2026-01-20T01:42:39.501653796Z" level=info msg="TearDown network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\" successfully" Jan 20 01:42:39.501965 containerd[1737]: time="2026-01-20T01:42:39.501679556Z" level=info msg="StopPodSandbox for \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\" returns successfully" Jan 20 01:42:39.502524 containerd[1737]: time="2026-01-20T01:42:39.502497996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gxvd,Uid:44bdd32b-1d8e-4e5b-bb73-1e59535dcb96,Namespace:calico-system,Attempt:1,}" Jan 20 01:42:39.527662 systemd[1]: run-netns-cni\x2d23cf423d\x2de72c\x2d9e3e\x2df92d\x2d357da125697d.mount: Deactivated successfully. Jan 20 01:42:39.602014 systemd-networkd[1356]: vxlan.calico: Gained IPv6LL Jan 20 01:42:39.641267 systemd-networkd[1356]: cali8c79e50bd4a: Link UP Jan 20 01:42:39.642362 systemd-networkd[1356]: cali8c79e50bd4a: Gained carrier Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.569 [INFO][5691] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0 csi-node-driver- calico-system 44bdd32b-1d8e-4e5b-bb73-1e59535dcb96 1098 0 2026-01-20 01:42:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-e5d82fe73a csi-node-driver-4gxvd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8c79e50bd4a [] [] }} ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Namespace="calico-system" Pod="csi-node-driver-4gxvd" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.569 [INFO][5691] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Namespace="calico-system" Pod="csi-node-driver-4gxvd" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.591 [INFO][5703] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" HandleID="k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.591 [INFO][5703] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" HandleID="k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c8fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e5d82fe73a", "pod":"csi-node-driver-4gxvd", "timestamp":"2026-01-20 01:42:39.591702644 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e5d82fe73a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.591 [INFO][5703] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.591 [INFO][5703] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.591 [INFO][5703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e5d82fe73a' Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.601 [INFO][5703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.606 [INFO][5703] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.610 [INFO][5703] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.612 [INFO][5703] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.614 [INFO][5703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.615 [INFO][5703] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.616 [INFO][5703] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447 Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.622 [INFO][5703] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.632 [INFO][5703] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.9/26] block=192.168.69.0/26 handle="k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.632 [INFO][5703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.9/26] handle="k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" host="ci-4081.3.6-n-e5d82fe73a" Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.632 [INFO][5703] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:42:39.661208 containerd[1737]: 2026-01-20 01:42:39.632 [INFO][5703] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.9/26] IPv6=[] ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" HandleID="k8s-pod-network.f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.663983 containerd[1737]: 2026-01-20 01:42:39.635 [INFO][5691] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Namespace="calico-system" Pod="csi-node-driver-4gxvd" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"", Pod:"csi-node-driver-4gxvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c79e50bd4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:39.663983 containerd[1737]: 2026-01-20 01:42:39.636 [INFO][5691] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.9/32] ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Namespace="calico-system" Pod="csi-node-driver-4gxvd" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.663983 containerd[1737]: 2026-01-20 01:42:39.636 [INFO][5691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c79e50bd4a ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Namespace="calico-system" Pod="csi-node-driver-4gxvd" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.663983 containerd[1737]: 2026-01-20 01:42:39.643 [INFO][5691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Namespace="calico-system" Pod="csi-node-driver-4gxvd" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.663983 containerd[1737]: 2026-01-20 01:42:39.643 [INFO][5691] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Namespace="calico-system" Pod="csi-node-driver-4gxvd" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447", Pod:"csi-node-driver-4gxvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c79e50bd4a", MAC:"e2:e9:bb:8b:93:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:39.663983 containerd[1737]: 2026-01-20 01:42:39.656 [INFO][5691] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447" Namespace="calico-system" Pod="csi-node-driver-4gxvd" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:39.679820 kubelet[3217]: E0120 01:42:39.677072 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:42:39.680630 kubelet[3217]: E0120 01:42:39.680504 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:42:39.700742 containerd[1737]: 
time="2026-01-20T01:42:39.700011095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:39.700742 containerd[1737]: time="2026-01-20T01:42:39.700230375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:39.701047 containerd[1737]: time="2026-01-20T01:42:39.700281695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:39.701835 containerd[1737]: time="2026-01-20T01:42:39.701433535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:39.720112 kubelet[3217]: I0120 01:42:39.720053 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ddgxn" podStartSLOduration=46.720038897 podStartE2EDuration="46.720038897s" podCreationTimestamp="2026-01-20 01:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:39.695024295 +0000 UTC m=+53.392191096" watchObservedRunningTime="2026-01-20 01:42:39.720038897 +0000 UTC m=+53.417205698" Jan 20 01:42:39.730288 systemd-networkd[1356]: cali9baf8d6c635: Gained IPv6LL Jan 20 01:42:39.736954 systemd[1]: Started cri-containerd-f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447.scope - libcontainer container f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447. Jan 20 01:42:39.782017 containerd[1737]: time="2026-01-20T01:42:39.781700423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gxvd,Uid:44bdd32b-1d8e-4e5b-bb73-1e59535dcb96,Namespace:calico-system,Attempt:1,} returns sandbox id \"f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447\"" Jan 20 01:42:39.785640 containerd[1737]: time="2026-01-20T01:42:39.785608263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:42:40.042382 containerd[1737]: time="2026-01-20T01:42:40.042152328Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:40.044736 containerd[1737]: time="2026-01-20T01:42:40.044649569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:42:40.044736 containerd[1737]: time="2026-01-20T01:42:40.044717009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:42:40.045191 kubelet[3217]: E0120 01:42:40.044982 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:42:40.045191 kubelet[3217]: E0120 01:42:40.045037 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:42:40.045191 kubelet[3217]: E0120 01:42:40.045154 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:40.047204 containerd[1737]: time="2026-01-20T01:42:40.047144449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:42:40.416266 containerd[1737]: time="2026-01-20T01:42:40.416201845Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:40.421190 containerd[1737]: time="2026-01-20T01:42:40.420369285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:42:40.421190 containerd[1737]: time="2026-01-20T01:42:40.420759485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:42:40.421315 kubelet[3217]: E0120 01:42:40.421064 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:42:40.421315 kubelet[3217]: E0120 01:42:40.421097 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:42:40.423880 kubelet[3217]: E0120 01:42:40.423831 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:40.425171 kubelet[3217]: E0120 01:42:40.425127 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:40.681693 kubelet[3217]: E0120 01:42:40.681566 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:42:40.682347 kubelet[3217]: E0120 01:42:40.682287 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:40.688936 systemd-networkd[1356]: cali58367ec4bb5: Gained IPv6LL Jan 20 01:42:40.753945 systemd-networkd[1356]: califdfd89bb446: Gained IPv6LL Jan 20 01:42:40.881981 systemd-networkd[1356]: cali8c79e50bd4a: Gained IPv6LL Jan 20 01:42:41.683407 kubelet[3217]: E0120 01:42:41.683353 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:46.417829 containerd[1737]: 
time="2026-01-20T01:42:46.417674332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:42:46.419566 containerd[1737]: time="2026-01-20T01:42:46.419527533Z" level=info msg="StopPodSandbox for \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\"" Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.463 [WARNING][5787] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0", GenerateName:"calico-kube-controllers-59886bc69c-", Namespace:"calico-system", SelfLink:"", UID:"a793b124-6073-4604-9c24-ad5326cb3836", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59886bc69c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2", Pod:"calico-kube-controllers-59886bc69c-2p6tc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califdfd89bb446", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.464 [INFO][5787] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.464 [INFO][5787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" iface="eth0" netns="" Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.464 [INFO][5787] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.464 [INFO][5787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.484 [INFO][5794] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.484 [INFO][5794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.484 [INFO][5794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.496 [WARNING][5794] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.496 [INFO][5794] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.497 [INFO][5794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.501519 containerd[1737]: 2026-01-20 01:42:46.499 [INFO][5787] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:46.502224 containerd[1737]: time="2026-01-20T01:42:46.501555959Z" level=info msg="TearDown network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\" successfully" Jan 20 01:42:46.502224 containerd[1737]: time="2026-01-20T01:42:46.501578199Z" level=info msg="StopPodSandbox for \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\" returns successfully" Jan 20 01:42:46.502700 containerd[1737]: time="2026-01-20T01:42:46.502447319Z" level=info msg="RemovePodSandbox for \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\"" Jan 20 01:42:46.502700 containerd[1737]: time="2026-01-20T01:42:46.502475719Z" level=info msg="Forcibly stopping sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\"" Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.536 [WARNING][5809] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0", GenerateName:"calico-kube-controllers-59886bc69c-", Namespace:"calico-system", SelfLink:"", UID:"a793b124-6073-4604-9c24-ad5326cb3836", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59886bc69c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"d75de3b9bef1f216dfa219439bd294bb6d698c3ea2f88abb0d9c3c3632af3fe2", Pod:"calico-kube-controllers-59886bc69c-2p6tc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califdfd89bb446", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.537 [INFO][5809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.537 [INFO][5809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" iface="eth0" netns="" Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.537 [INFO][5809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.537 [INFO][5809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.559 [INFO][5816] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.559 [INFO][5816] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.559 [INFO][5816] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.569 [WARNING][5816] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.569 [INFO][5816] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" HandleID="k8s-pod-network.b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--kube--controllers--59886bc69c--2p6tc-eth0" Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.570 [INFO][5816] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.574088 containerd[1737]: 2026-01-20 01:42:46.572 [INFO][5809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6" Jan 20 01:42:46.575180 containerd[1737]: time="2026-01-20T01:42:46.574549222Z" level=info msg="TearDown network for sandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\" successfully" Jan 20 01:42:46.583177 containerd[1737]: time="2026-01-20T01:42:46.583144705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:42:46.583294 containerd[1737]: time="2026-01-20T01:42:46.583279825Z" level=info msg="RemovePodSandbox \"b6a9beeb24aa4983cdc9769f6e4629a1ed7fc373c9cea2894057464251c5f5d6\" returns successfully" Jan 20 01:42:46.583909 containerd[1737]: time="2026-01-20T01:42:46.583889065Z" level=info msg="StopPodSandbox for \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\"" Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.622 [WARNING][5830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0", GenerateName:"calico-apiserver-588969c7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c28c5ae-f540-4875-a2fd-481f9d148cbd", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588969c7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf", Pod:"calico-apiserver-588969c7f9-g5sn6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9baf8d6c635", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.623 [INFO][5830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.623 [INFO][5830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" iface="eth0" netns="" Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.623 [INFO][5830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.623 [INFO][5830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.643 [INFO][5837] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.643 [INFO][5837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.643 [INFO][5837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.652 [WARNING][5837] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.652 [INFO][5837] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.654 [INFO][5837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.662350 containerd[1737]: 2026-01-20 01:42:46.659 [INFO][5830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:46.663504 containerd[1737]: time="2026-01-20T01:42:46.662380930Z" level=info msg="TearDown network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\" successfully" Jan 20 01:42:46.663504 containerd[1737]: time="2026-01-20T01:42:46.662403570Z" level=info msg="StopPodSandbox for \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\" returns successfully" Jan 20 01:42:46.663504 containerd[1737]: time="2026-01-20T01:42:46.662999650Z" level=info msg="RemovePodSandbox for \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\"" Jan 20 01:42:46.663504 containerd[1737]: time="2026-01-20T01:42:46.663027330Z" level=info msg="Forcibly stopping sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\"" Jan 20 01:42:46.687803 containerd[1737]: time="2026-01-20T01:42:46.687355258Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:46.692231 containerd[1737]: time="2026-01-20T01:42:46.690465379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:42:46.692231 containerd[1737]: time="2026-01-20T01:42:46.690555779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:42:46.692719 kubelet[3217]: E0120 01:42:46.690952 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:42:46.692719 kubelet[3217]: E0120 01:42:46.690989 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:42:46.692719 kubelet[3217]: E0120 01:42:46.691083 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8f17a3a521ca43d5a97871bb0e325b25,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f58966dbf-54hk5_calico-system(a7a9b064-5e91-49bb-b0db-fcf6fce9b0be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:46.696909 containerd[1737]: time="2026-01-20T01:42:46.696497781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.718 [WARNING][5851] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0", GenerateName:"calico-apiserver-588969c7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c28c5ae-f540-4875-a2fd-481f9d148cbd", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588969c7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"5f9904592fc526e34b8b42fc6e570ede4fe3f3da618dc4d2722a3b3f3be2d0bf", Pod:"calico-apiserver-588969c7f9-g5sn6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9baf8d6c635", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.719 [INFO][5851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.719 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" iface="eth0" netns="" Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.719 [INFO][5851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.719 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.738 [INFO][5861] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.738 [INFO][5861] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.738 [INFO][5861] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.747 [WARNING][5861] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.747 [INFO][5861] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" HandleID="k8s-pod-network.a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--g5sn6-eth0" Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.748 [INFO][5861] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.752503 containerd[1737]: 2026-01-20 01:42:46.750 [INFO][5851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be" Jan 20 01:42:46.753176 containerd[1737]: time="2026-01-20T01:42:46.752598238Z" level=info msg="TearDown network for sandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\" successfully" Jan 20 01:42:46.759430 containerd[1737]: time="2026-01-20T01:42:46.759307721Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:42:46.759430 containerd[1737]: time="2026-01-20T01:42:46.759378161Z" level=info msg="RemovePodSandbox \"a3ec01c1ea255b8e40f64b49da94c65746dd5f097726b3b4289b19823f1d34be\" returns successfully" Jan 20 01:42:46.760188 containerd[1737]: time="2026-01-20T01:42:46.759934041Z" level=info msg="StopPodSandbox for \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\"" Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.795 [WARNING][5875] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"68f5545e-7661-40cf-baeb-c5c30a862135", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b", Pod:"goldmane-666569f655-qkv75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calice84e2cd3b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.796 [INFO][5875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.796 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" iface="eth0" netns="" Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.796 [INFO][5875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.796 [INFO][5875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.816 [INFO][5882] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.816 [INFO][5882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.816 [INFO][5882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.826 [WARNING][5882] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.826 [INFO][5882] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.827 [INFO][5882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.831128 containerd[1737]: 2026-01-20 01:42:46.829 [INFO][5875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:46.832362 containerd[1737]: time="2026-01-20T01:42:46.831567103Z" level=info msg="TearDown network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\" successfully" Jan 20 01:42:46.832362 containerd[1737]: time="2026-01-20T01:42:46.831595543Z" level=info msg="StopPodSandbox for \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\" returns successfully" Jan 20 01:42:46.832362 containerd[1737]: time="2026-01-20T01:42:46.832090904Z" level=info msg="RemovePodSandbox for \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\"" Jan 20 01:42:46.832362 containerd[1737]: time="2026-01-20T01:42:46.832117464Z" level=info msg="Forcibly stopping sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\"" Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.870 [WARNING][5897] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"68f5545e-7661-40cf-baeb-c5c30a862135", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"c6a47851078f7a6678b072f8ca1e1be14d0bad390de3091a8cb1162df3cc420b", Pod:"goldmane-666569f655-qkv75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calice84e2cd3b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.871 [INFO][5897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.871 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" iface="eth0" netns="" Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.871 [INFO][5897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.871 [INFO][5897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.889 [INFO][5904] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.889 [INFO][5904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.889 [INFO][5904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.899 [WARNING][5904] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.899 [INFO][5904] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" HandleID="k8s-pod-network.53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-goldmane--666569f655--qkv75-eth0" Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.900 [INFO][5904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.904193 containerd[1737]: 2026-01-20 01:42:46.902 [INFO][5897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2" Jan 20 01:42:46.904992 containerd[1737]: time="2026-01-20T01:42:46.904636567Z" level=info msg="TearDown network for sandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\" successfully" Jan 20 01:42:46.910272 containerd[1737]: time="2026-01-20T01:42:46.910136488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:42:46.910272 containerd[1737]: time="2026-01-20T01:42:46.910192448Z" level=info msg="RemovePodSandbox \"53ac560486fb95f2cf0ebbef477d92a2517e33288a8b9519e6947054b94ad4f2\" returns successfully" Jan 20 01:42:46.910737 containerd[1737]: time="2026-01-20T01:42:46.910707208Z" level=info msg="StopPodSandbox for \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\"" Jan 20 01:42:46.954262 containerd[1737]: time="2026-01-20T01:42:46.954146742Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:46.957514 containerd[1737]: time="2026-01-20T01:42:46.957248783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:42:46.957514 containerd[1737]: time="2026-01-20T01:42:46.957353303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:42:46.958184 kubelet[3217]: E0120 01:42:46.958065 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:42:46.958184 kubelet[3217]: E0120 01:42:46.958118 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:42:46.958298 kubelet[3217]: E0120 01:42:46.958243 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f58966dbf-54hk5_calico-system(a7a9b064-5e91-49bb-b0db-fcf6fce9b0be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:46.960187 kubelet[3217]: E0120 01:42:46.959702 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.944 [WARNING][5918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0", GenerateName:"calico-apiserver-7dd6fbd444-", Namespace:"calico-apiserver", SelfLink:"", UID:"97a804d9-65a3-4df8-a009-6289887849fb", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dd6fbd444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd", Pod:"calico-apiserver-7dd6fbd444-88ccv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1c67e0426d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.945 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.945 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" iface="eth0" netns="" Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.945 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.945 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.978 [INFO][5925] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.979 [INFO][5925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.979 [INFO][5925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.987 [WARNING][5925] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.987 [INFO][5925] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.989 [INFO][5925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.993961 containerd[1737]: 2026-01-20 01:42:46.991 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:46.995064 containerd[1737]: time="2026-01-20T01:42:46.994923834Z" level=info msg="TearDown network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\" successfully" Jan 20 01:42:46.995064 containerd[1737]: time="2026-01-20T01:42:46.994966834Z" level=info msg="StopPodSandbox for \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\" returns successfully" Jan 20 01:42:46.995587 containerd[1737]: time="2026-01-20T01:42:46.995561714Z" level=info msg="RemovePodSandbox for \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\"" Jan 20 01:42:46.995648 containerd[1737]: time="2026-01-20T01:42:46.995589794Z" level=info msg="Forcibly stopping sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\"" Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.036 [WARNING][5939] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0", GenerateName:"calico-apiserver-7dd6fbd444-", Namespace:"calico-apiserver", SelfLink:"", UID:"97a804d9-65a3-4df8-a009-6289887849fb", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dd6fbd444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"821c4604b41ceedd73fd95171e89197def36521c78c7d7db2f9c4da497d607fd", Pod:"calico-apiserver-7dd6fbd444-88ccv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1c67e0426d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.036 [INFO][5939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.036 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" iface="eth0" netns="" Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.036 [INFO][5939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.036 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.056 [INFO][5946] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.057 [INFO][5946] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.057 [INFO][5946] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.065 [WARNING][5946] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.065 [INFO][5946] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" HandleID="k8s-pod-network.b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--7dd6fbd444--88ccv-eth0" Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.067 [INFO][5946] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.071083 containerd[1737]: 2026-01-20 01:42:47.069 [INFO][5939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830" Jan 20 01:42:47.071479 containerd[1737]: time="2026-01-20T01:42:47.071130177Z" level=info msg="TearDown network for sandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\" successfully" Jan 20 01:42:47.078138 containerd[1737]: time="2026-01-20T01:42:47.078088660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:42:47.078305 containerd[1737]: time="2026-01-20T01:42:47.078154780Z" level=info msg="RemovePodSandbox \"b12f3eaa4901631d4878226273da386042a2864d935539a3ad899b9f599bc830\" returns successfully" Jan 20 01:42:47.079116 containerd[1737]: time="2026-01-20T01:42:47.078850380Z" level=info msg="StopPodSandbox for \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\"" Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.115 [WARNING][5960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447", Pod:"csi-node-driver-4gxvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c79e50bd4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.115 [INFO][5960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.115 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" iface="eth0" netns="" Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.115 [INFO][5960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.115 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.140 [INFO][5967] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.140 [INFO][5967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.140 [INFO][5967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.151 [WARNING][5967] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.151 [INFO][5967] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.153 [INFO][5967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.157523 containerd[1737]: 2026-01-20 01:42:47.155 [INFO][5960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:47.157933 containerd[1737]: time="2026-01-20T01:42:47.157599524Z" level=info msg="TearDown network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\" successfully" Jan 20 01:42:47.157933 containerd[1737]: time="2026-01-20T01:42:47.157624804Z" level=info msg="StopPodSandbox for \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\" returns successfully" Jan 20 01:42:47.158662 containerd[1737]: time="2026-01-20T01:42:47.158342964Z" level=info msg="RemovePodSandbox for \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\"" Jan 20 01:42:47.158662 containerd[1737]: time="2026-01-20T01:42:47.158374164Z" level=info msg="Forcibly stopping sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\"" Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.195 [WARNING][5981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44bdd32b-1d8e-4e5b-bb73-1e59535dcb96", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"f72ffd92c67d319a9d22a3834661fafe3603b1800ec9abc45bda96ac5436f447", Pod:"csi-node-driver-4gxvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c79e50bd4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.195 [INFO][5981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.195 [INFO][5981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" iface="eth0" netns="" Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.195 [INFO][5981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.195 [INFO][5981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.214 [INFO][5988] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.214 [INFO][5988] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.214 [INFO][5988] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.223 [WARNING][5988] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.223 [INFO][5988] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" HandleID="k8s-pod-network.ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-csi--node--driver--4gxvd-eth0" Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.225 [INFO][5988] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.229953 containerd[1737]: 2026-01-20 01:42:47.227 [INFO][5981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295" Jan 20 01:42:47.229953 containerd[1737]: time="2026-01-20T01:42:47.229006226Z" level=info msg="TearDown network for sandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\" successfully" Jan 20 01:42:47.241685 containerd[1737]: time="2026-01-20T01:42:47.241643670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:42:47.241861 containerd[1737]: time="2026-01-20T01:42:47.241844950Z" level=info msg="RemovePodSandbox \"ffd45a6417f2a9c46b062347bdd9f5865588991efe2616256d7a4139083cf295\" returns successfully" Jan 20 01:42:47.242493 containerd[1737]: time="2026-01-20T01:42:47.242437750Z" level=info msg="StopPodSandbox for \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\"" Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.280 [WARNING][6002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0", GenerateName:"calico-apiserver-588969c7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d821eeb9-a64e-4dc2-bbef-b0976a3bf49a", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588969c7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f", Pod:"calico-apiserver-588969c7f9-dsqq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13f65b2e1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.280 [INFO][6002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.280 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" iface="eth0" netns="" Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.280 [INFO][6002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.280 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.301 [INFO][6009] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.301 [INFO][6009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.301 [INFO][6009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.310 [WARNING][6009] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.310 [INFO][6009] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.312 [INFO][6009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.316951 containerd[1737]: 2026-01-20 01:42:47.314 [INFO][6002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:47.317643 containerd[1737]: time="2026-01-20T01:42:47.317385773Z" level=info msg="TearDown network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\" successfully" Jan 20 01:42:47.317643 containerd[1737]: time="2026-01-20T01:42:47.317417173Z" level=info msg="StopPodSandbox for \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\" returns successfully" Jan 20 01:42:47.318309 containerd[1737]: time="2026-01-20T01:42:47.318027453Z" level=info msg="RemovePodSandbox for \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\"" Jan 20 01:42:47.318309 containerd[1737]: time="2026-01-20T01:42:47.318057613Z" level=info msg="Forcibly stopping sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\"" Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.357 [WARNING][6023] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0", GenerateName:"calico-apiserver-588969c7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d821eeb9-a64e-4dc2-bbef-b0976a3bf49a", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"588969c7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"cbf535a4c46ba90be7e88b0e2c95838f6ecf0179fe6c5841210abdb8f8f1e17f", Pod:"calico-apiserver-588969c7f9-dsqq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13f65b2e1f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.357 [INFO][6023] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.357 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" iface="eth0" netns="" Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.357 [INFO][6023] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.357 [INFO][6023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.375 [INFO][6030] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.376 [INFO][6030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.376 [INFO][6030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.385 [WARNING][6030] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.385 [INFO][6030] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" HandleID="k8s-pod-network.48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-calico--apiserver--588969c7f9--dsqq7-eth0" Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.388 [INFO][6030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.395214 containerd[1737]: 2026-01-20 01:42:47.392 [INFO][6023] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772" Jan 20 01:42:47.395214 containerd[1737]: time="2026-01-20T01:42:47.395135357Z" level=info msg="TearDown network for sandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\" successfully" Jan 20 01:42:47.404109 containerd[1737]: time="2026-01-20T01:42:47.404064039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:42:47.404377 containerd[1737]: time="2026-01-20T01:42:47.404130119Z" level=info msg="RemovePodSandbox \"48e743a8ce42aeb6a907f7ec2ef4d3b84aca5c26471514ff8d092230a07c5772\" returns successfully" Jan 20 01:42:47.404998 containerd[1737]: time="2026-01-20T01:42:47.404733640Z" level=info msg="StopPodSandbox for \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\"" Jan 20 01:42:47.417910 containerd[1737]: time="2026-01-20T01:42:47.417769164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.445 [WARNING][6045] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"138649de-b257-4de2-b470-3f54b1f24475", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25", Pod:"coredns-674b8bbfcf-ddgxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58367ec4bb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.445 [INFO][6045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.445 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" iface="eth0" netns="" Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.445 [INFO][6045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.445 [INFO][6045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.463 [INFO][6052] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.464 [INFO][6052] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.464 [INFO][6052] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.472 [WARNING][6052] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.472 [INFO][6052] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.474 [INFO][6052] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.477716 containerd[1737]: 2026-01-20 01:42:47.475 [INFO][6045] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:47.478845 containerd[1737]: time="2026-01-20T01:42:47.477749142Z" level=info msg="TearDown network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\" successfully" Jan 20 01:42:47.478845 containerd[1737]: time="2026-01-20T01:42:47.477774382Z" level=info msg="StopPodSandbox for \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\" returns successfully" Jan 20 01:42:47.478845 containerd[1737]: time="2026-01-20T01:42:47.478242622Z" level=info msg="RemovePodSandbox for \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\"" Jan 20 01:42:47.478845 containerd[1737]: time="2026-01-20T01:42:47.478270102Z" level=info msg="Forcibly stopping sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\"" Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.512 [WARNING][6066] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"138649de-b257-4de2-b470-3f54b1f24475", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"a88615eddfd991777bf60bf98f4de65cc72a34010723be2fb5329a39ef209a25", Pod:"coredns-674b8bbfcf-ddgxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58367ec4bb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.513 [INFO][6066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.513 [INFO][6066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" iface="eth0" netns="" Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.513 [INFO][6066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.513 [INFO][6066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.535 [INFO][6073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.535 [INFO][6073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.535 [INFO][6073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.544 [WARNING][6073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.544 [INFO][6073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" HandleID="k8s-pod-network.2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--ddgxn-eth0" Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.545 [INFO][6073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.550856 containerd[1737]: 2026-01-20 01:42:47.547 [INFO][6066] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4" Jan 20 01:42:47.550856 containerd[1737]: time="2026-01-20T01:42:47.549540924Z" level=info msg="TearDown network for sandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\" successfully" Jan 20 01:42:47.557130 containerd[1737]: time="2026-01-20T01:42:47.556754526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:42:47.557130 containerd[1737]: time="2026-01-20T01:42:47.556816646Z" level=info msg="RemovePodSandbox \"2cd61dceab5bd543b5ea2493035f30c185636b3ee0f3f224eeec905b2f01d2d4\" returns successfully" Jan 20 01:42:47.557328 containerd[1737]: time="2026-01-20T01:42:47.557254766Z" level=info msg="StopPodSandbox for \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\"" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.599 [WARNING][6087] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.599 [INFO][6087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.599 [INFO][6087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" iface="eth0" netns="" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.599 [INFO][6087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.599 [INFO][6087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.617 [INFO][6094] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.617 [INFO][6094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.617 [INFO][6094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.626 [WARNING][6094] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.626 [INFO][6094] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.628 [INFO][6094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.631790 containerd[1737]: 2026-01-20 01:42:47.630 [INFO][6087] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:47.632300 containerd[1737]: time="2026-01-20T01:42:47.631832069Z" level=info msg="TearDown network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\" successfully" Jan 20 01:42:47.632300 containerd[1737]: time="2026-01-20T01:42:47.631855709Z" level=info msg="StopPodSandbox for \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\" returns successfully" Jan 20 01:42:47.632377 containerd[1737]: time="2026-01-20T01:42:47.632349829Z" level=info msg="RemovePodSandbox for \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\"" Jan 20 01:42:47.632407 containerd[1737]: time="2026-01-20T01:42:47.632377949Z" level=info msg="Forcibly stopping sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\"" Jan 20 01:42:47.667931 containerd[1737]: time="2026-01-20T01:42:47.667886680Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:47.670305 containerd[1737]: time="2026-01-20T01:42:47.670263001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:42:47.670380 containerd[1737]: time="2026-01-20T01:42:47.670363201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:47.671175 kubelet[3217]: E0120 01:42:47.670490 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:42:47.671175 kubelet[3217]: E0120 01:42:47.670538 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:42:47.671175 kubelet[3217]: E0120 01:42:47.670679 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fqhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qkv75_calico-system(68f5545e-7661-40cf-baeb-c5c30a862135): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:47.672234 kubelet[3217]: E0120 01:42:47.672112 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 
01:42:47.664 [WARNING][6108] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" WorkloadEndpoint="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.664 [INFO][6108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.664 [INFO][6108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" iface="eth0" netns="" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.664 [INFO][6108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.664 [INFO][6108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.686 [INFO][6115] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.686 [INFO][6115] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.686 [INFO][6115] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.700 [WARNING][6115] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.701 [INFO][6115] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" HandleID="k8s-pod-network.78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-whisker--d78647bdf--6kjdc-eth0" Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.704 [INFO][6115] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.710544 containerd[1737]: 2026-01-20 01:42:47.708 [INFO][6108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca" Jan 20 01:42:47.710955 containerd[1737]: time="2026-01-20T01:42:47.710570773Z" level=info msg="TearDown network for sandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\" successfully" Jan 20 01:42:47.720506 containerd[1737]: time="2026-01-20T01:42:47.719241816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 20 01:42:47.720506 containerd[1737]: time="2026-01-20T01:42:47.719322296Z" level=info msg="RemovePodSandbox \"78998a98cdccd4b98e511287440cdb88057b66674a2eb5038f928f0f66d083ca\" returns successfully" Jan 20 01:42:47.721048 containerd[1737]: time="2026-01-20T01:42:47.721025376Z" level=info msg="StopPodSandbox for \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\"" Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.796 [WARNING][6129] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8eac5578-1c74-4107-a02f-d780338d63d7", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b", Pod:"coredns-674b8bbfcf-m78tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali746d596c0e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.796 [INFO][6129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.796 [INFO][6129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" iface="eth0" netns="" Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.796 [INFO][6129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.796 [INFO][6129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.816 [INFO][6136] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.816 [INFO][6136] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.816 [INFO][6136] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.825 [WARNING][6136] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.825 [INFO][6136] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.827 [INFO][6136] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.830775 containerd[1737]: 2026-01-20 01:42:47.828 [INFO][6129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:47.831189 containerd[1737]: time="2026-01-20T01:42:47.830779770Z" level=info msg="TearDown network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\" successfully" Jan 20 01:42:47.831189 containerd[1737]: time="2026-01-20T01:42:47.830822570Z" level=info msg="StopPodSandbox for \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\" returns successfully" Jan 20 01:42:47.831836 containerd[1737]: time="2026-01-20T01:42:47.831811570Z" level=info msg="RemovePodSandbox for \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\"" Jan 20 01:42:47.831903 containerd[1737]: time="2026-01-20T01:42:47.831843930Z" level=info msg="Forcibly stopping sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\"" Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.864 [WARNING][6150] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8eac5578-1c74-4107-a02f-d780338d63d7", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e5d82fe73a", ContainerID:"340889f793e99e9613717822368cc8bfdffc4e9e02bc3e9b8f0b5ea618cd9a8b", Pod:"coredns-674b8bbfcf-m78tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali746d596c0e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.864 [INFO][6150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.864 [INFO][6150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" iface="eth0" netns="" Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.864 [INFO][6150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.864 [INFO][6150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.881 [INFO][6157] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.881 [INFO][6157] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.881 [INFO][6157] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.890 [WARNING][6157] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.890 [INFO][6157] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" HandleID="k8s-pod-network.0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Workload="ci--4081.3.6--n--e5d82fe73a-k8s-coredns--674b8bbfcf--m78tf-eth0" Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.892 [INFO][6157] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.895986 containerd[1737]: 2026-01-20 01:42:47.893 [INFO][6150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e" Jan 20 01:42:47.896388 containerd[1737]: time="2026-01-20T01:42:47.896038470Z" level=info msg="TearDown network for sandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\" successfully" Jan 20 01:42:47.902314 containerd[1737]: time="2026-01-20T01:42:47.902274472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:42:47.902379 containerd[1737]: time="2026-01-20T01:42:47.902339512Z" level=info msg="RemovePodSandbox \"0c9b945667a44b6c0cb4be4975b73d8c0c3d49116e26a41d711cddf9b86fd34e\" returns successfully" Jan 20 01:42:49.417908 containerd[1737]: time="2026-01-20T01:42:49.416861975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:49.680492 containerd[1737]: time="2026-01-20T01:42:49.680327176Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:49.683653 containerd[1737]: time="2026-01-20T01:42:49.683615537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:49.683836 containerd[1737]: time="2026-01-20T01:42:49.683639937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:49.683875 kubelet[3217]: E0120 01:42:49.683811 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:49.683875 kubelet[3217]: E0120 01:42:49.683851 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:49.684140 kubelet[3217]: E0120 01:42:49.683970 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgvkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-588969c7f9-dsqq7_calico-apiserver(d821eeb9-a64e-4dc2-bbef-b0976a3bf49a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:49.685434 kubelet[3217]: E0120 01:42:49.685353 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:42:51.416577 containerd[1737]: time="2026-01-20T01:42:51.416529868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:51.661232 containerd[1737]: time="2026-01-20T01:42:51.661034742Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:51.665472 containerd[1737]: time="2026-01-20T01:42:51.665429184Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:51.665550 containerd[1737]: time="2026-01-20T01:42:51.665531824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:51.665952 kubelet[3217]: E0120 01:42:51.665697 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:51.665952 kubelet[3217]: E0120 01:42:51.665746 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:51.665952 kubelet[3217]: E0120 01:42:51.665892 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xth98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dd6fbd444-88ccv_calico-apiserver(97a804d9-65a3-4df8-a009-6289887849fb): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:51.667439 kubelet[3217]: E0120 01:42:51.667338 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:42:53.417216 containerd[1737]: time="2026-01-20T01:42:53.417175200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:42:53.647613 containerd[1737]: time="2026-01-20T01:42:53.647567070Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:53.650010 containerd[1737]: time="2026-01-20T01:42:53.649972831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:42:53.650094 containerd[1737]: time="2026-01-20T01:42:53.650063311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:42:53.650430 kubelet[3217]: E0120 01:42:53.650195 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:42:53.650430 kubelet[3217]: E0120 01:42:53.650247 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:42:53.650430 kubelet[3217]: E0120 01:42:53.650374 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhlh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59886bc69c-2p6tc_calico-system(a793b124-6073-4604-9c24-ad5326cb3836): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:53.651820 kubelet[3217]: E0120 01:42:53.651778 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:42:55.416657 containerd[1737]: time="2026-01-20T01:42:55.416428873Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:55.673188 containerd[1737]: time="2026-01-20T01:42:55.673048942Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:55.676139 containerd[1737]: time="2026-01-20T01:42:55.676094823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:55.676239 containerd[1737]: time="2026-01-20T01:42:55.676191423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:55.676768 kubelet[3217]: E0120 01:42:55.676339 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:55.676768 kubelet[3217]: E0120 01:42:55.676387 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:55.676768 kubelet[3217]: E0120 01:42:55.676523 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddcnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-588969c7f9-g5sn6_calico-apiserver(8c28c5ae-f540-4875-a2fd-481f9d148cbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:55.678154 kubelet[3217]: E0120 01:42:55.678103 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:42:56.417595 containerd[1737]: time="2026-01-20T01:42:56.417500342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:42:56.649822 containerd[1737]: time="2026-01-20T01:42:56.649572564Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:56.652490 containerd[1737]: time="2026-01-20T01:42:56.652374925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:42:56.652490 containerd[1737]: time="2026-01-20T01:42:56.652440885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:42:56.652744 kubelet[3217]: E0120 01:42:56.652706 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:42:56.652828 kubelet[3217]: E0120 01:42:56.652753 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:42:56.653235 kubelet[3217]: E0120 01:42:56.652884 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:56.655016 containerd[1737]: time="2026-01-20T01:42:56.654984446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:42:56.912996 containerd[1737]: time="2026-01-20T01:42:56.912952275Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:56.915475 containerd[1737]: time="2026-01-20T01:42:56.915437956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:42:56.915576 containerd[1737]: time="2026-01-20T01:42:56.915549196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:42:56.915762 kubelet[3217]: E0120 01:42:56.915724 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:42:56.916060 kubelet[3217]: E0120 01:42:56.915774 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:42:56.916060 kubelet[3217]: E0120 01:42:56.915935 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:56.917415 kubelet[3217]: E0120 01:42:56.917376 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:42:57.419812 kubelet[3217]: E0120 01:42:57.417867 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:43:01.416971 kubelet[3217]: E0120 01:43:01.416924 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:43:02.417121 kubelet[3217]: E0120 01:43:02.417012 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:43:03.417528 kubelet[3217]: E0120 01:43:03.417213 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:43:05.417039 kubelet[3217]: E0120 01:43:05.416619 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:43:11.417402 kubelet[3217]: E0120 01:43:11.416968 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:43:11.419885 kubelet[3217]: E0120 01:43:11.419801 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:43:12.418702 containerd[1737]: time="2026-01-20T01:43:12.418629804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:12.687438 containerd[1737]: time="2026-01-20T01:43:12.687026325Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:12.691231 containerd[1737]: time="2026-01-20T01:43:12.691135126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:12.693795 containerd[1737]: time="2026-01-20T01:43:12.691740886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:12.693897 kubelet[3217]: E0120 01:43:12.692458 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:12.693897 kubelet[3217]: E0120 01:43:12.692505 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:12.693897 kubelet[3217]: E0120 01:43:12.692760 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgvkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-588969c7f9-dsqq7_calico-apiserver(d821eeb9-a64e-4dc2-bbef-b0976a3bf49a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:12.696614 kubelet[3217]: E0120 01:43:12.696050 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:43:12.699021 containerd[1737]: time="2026-01-20T01:43:12.698946448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:43:12.955323 containerd[1737]: time="2026-01-20T01:43:12.955102845Z" level=info msg="trying next host 
- response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:12.957731 containerd[1737]: time="2026-01-20T01:43:12.957634725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:43:12.957731 containerd[1737]: time="2026-01-20T01:43:12.957704525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:43:12.957904 kubelet[3217]: E0120 01:43:12.957822 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:43:12.957904 kubelet[3217]: E0120 01:43:12.957863 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:43:12.958004 kubelet[3217]: E0120 01:43:12.957967 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8f17a3a521ca43d5a97871bb0e325b25,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f58966dbf-54hk5_calico-system(a7a9b064-5e91-49bb-b0db-fcf6fce9b0be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:12.960969 containerd[1737]: time="2026-01-20T01:43:12.960840526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 
01:43:13.210990 containerd[1737]: time="2026-01-20T01:43:13.210849161Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:13.213838 containerd[1737]: time="2026-01-20T01:43:13.213714242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:43:13.213838 containerd[1737]: time="2026-01-20T01:43:13.213801682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:13.213979 kubelet[3217]: E0120 01:43:13.213946 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:43:13.214018 kubelet[3217]: E0120 01:43:13.213990 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:43:13.214147 kubelet[3217]: E0120 01:43:13.214100 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Restar
tPolicy:nil,} start failed in pod whisker-6f58966dbf-54hk5_calico-system(a7a9b064-5e91-49bb-b0db-fcf6fce9b0be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:13.215658 kubelet[3217]: E0120 01:43:13.215572 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:43:13.417409 containerd[1737]: time="2026-01-20T01:43:13.417368302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:43:13.819462 containerd[1737]: time="2026-01-20T01:43:13.819348542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:13.823869 containerd[1737]: time="2026-01-20T01:43:13.823582544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:43:13.823869 containerd[1737]: time="2026-01-20T01:43:13.823654864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:13.824013 kubelet[3217]: E0120 01:43:13.823815 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:13.824498 kubelet[3217]: E0120 01:43:13.824264 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:13.824498 kubelet[3217]: E0120 01:43:13.824431 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fqhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qkv75_calico-system(68f5545e-7661-40cf-baeb-c5c30a862135): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:13.825875 kubelet[3217]: E0120 01:43:13.825821 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:43:14.417574 containerd[1737]: 
time="2026-01-20T01:43:14.417536161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:14.676619 containerd[1737]: time="2026-01-20T01:43:14.676384438Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:14.678861 containerd[1737]: time="2026-01-20T01:43:14.678719719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:14.678861 containerd[1737]: time="2026-01-20T01:43:14.678790079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:14.679349 kubelet[3217]: E0120 01:43:14.679130 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:14.679349 kubelet[3217]: E0120 01:43:14.679174 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:14.679550 kubelet[3217]: E0120 01:43:14.679433 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xth98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dd6fbd444-88ccv_calico-apiserver(97a804d9-65a3-4df8-a009-6289887849fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:14.680674 kubelet[3217]: E0120 01:43:14.680626 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:43:18.418175 containerd[1737]: time="2026-01-20T01:43:18.418086435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:43:18.684837 containerd[1737]: time="2026-01-20T01:43:18.684429434Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:18.687950 containerd[1737]: time="2026-01-20T01:43:18.687850635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:43:18.687950 containerd[1737]: time="2026-01-20T01:43:18.687914355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:18.688769 kubelet[3217]: E0120 01:43:18.688161 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:18.688769 kubelet[3217]: E0120 01:43:18.688210 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:18.688769 kubelet[3217]: E0120 01:43:18.688361 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhlh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59886bc69c-2p6tc_calico-system(a793b124-6073-4604-9c24-ad5326cb3836): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:18.689914 kubelet[3217]: E0120 01:43:18.689885 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:43:23.417496 containerd[1737]: time="2026-01-20T01:43:23.417448133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:43:23.696205 containerd[1737]: time="2026-01-20T01:43:23.695740311Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:23.698825 containerd[1737]: time="2026-01-20T01:43:23.698722392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:43:23.698825 containerd[1737]: time="2026-01-20T01:43:23.698769992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:43:23.698963 kubelet[3217]: E0120 01:43:23.698925 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:23.699244 kubelet[3217]: E0120 01:43:23.698975 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:23.699244 kubelet[3217]: E0120 01:43:23.699088 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:23.701451 containerd[1737]: time="2026-01-20T01:43:23.701415872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:43:23.963628 containerd[1737]: time="2026-01-20T01:43:23.962733007Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:23.966410 containerd[1737]: time="2026-01-20T01:43:23.966357728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:43:23.966495 containerd[1737]: time="2026-01-20T01:43:23.966458888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:43:23.966648 kubelet[3217]: E0120 01:43:23.966614 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:23.966700 kubelet[3217]: E0120 01:43:23.966658 3217 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:23.966818 kubelet[3217]: E0120 01:43:23.966765 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:23.968078 kubelet[3217]: E0120 01:43:23.968043 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:43:25.417815 containerd[1737]: time="2026-01-20T01:43:25.417740314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:25.651965 containerd[1737]: time="2026-01-20T01:43:25.651873083Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:25.654688 containerd[1737]: time="2026-01-20T01:43:25.654600364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:25.654907 containerd[1737]: time="2026-01-20T01:43:25.654652244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:25.654978 kubelet[3217]: E0120 01:43:25.654885 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:25.654978 kubelet[3217]: E0120 01:43:25.654931 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:25.656227 kubelet[3217]: E0120 01:43:25.655074 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddcnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-588969c7f9-g5sn6_calico-apiserver(8c28c5ae-f540-4875-a2fd-481f9d148cbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:25.656481 kubelet[3217]: E0120 01:43:25.656375 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:43:26.429820 kubelet[3217]: E0120 01:43:26.428756 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:43:26.430313 kubelet[3217]: E0120 01:43:26.430277 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:43:27.417845 kubelet[3217]: E0120 01:43:27.417749 3217 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:43:28.421178 kubelet[3217]: E0120 01:43:28.420903 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:43:34.421596 kubelet[3217]: E0120 01:43:34.421546 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:43:36.098546 systemd[1]: run-containerd-runc-k8s.io-035fc14488e5de1e7d0ab0869d29ae9118a681e464ef2d52aef9fa4d500dc2d2-runc.0kdKoq.mount: Deactivated successfully. 
Jan 20 01:43:38.422862 kubelet[3217]: E0120 01:43:38.421174 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:43:38.422862 kubelet[3217]: E0120 01:43:38.421626 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:43:38.427847 kubelet[3217]: E0120 01:43:38.427713 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:43:38.433180 kubelet[3217]: E0120 01:43:38.433035 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:43:39.417295 kubelet[3217]: E0120 01:43:39.417231 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:43:41.416977 kubelet[3217]: E0120 01:43:41.416935 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:43:41.535074 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.16.10:33416.service - OpenSSH per-connection server daemon (10.200.16.10:33416). Jan 20 01:43:41.991443 sshd[6247]: Accepted publickey for core from 10.200.16.10 port 33416 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:41.994329 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:41.999726 systemd-logind[1707]: New session 10 of user core. Jan 20 01:43:42.003924 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 01:43:42.400545 sshd[6247]: pam_unix(sshd:session): session closed for user core Jan 20 01:43:42.403739 systemd[1]: sshd@7-10.200.20.17:22-10.200.16.10:33416.service: Deactivated successfully. Jan 20 01:43:42.406432 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:43:42.407195 systemd-logind[1707]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:43:42.409407 systemd-logind[1707]: Removed session 10. Jan 20 01:43:45.417650 kubelet[3217]: E0120 01:43:45.417212 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:43:47.515053 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.16.10:33428.service - OpenSSH per-connection server daemon (10.200.16.10:33428). Jan 20 01:43:48.003253 sshd[6262]: Accepted publickey for core from 10.200.16.10 port 33428 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:48.004114 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:48.009658 systemd-logind[1707]: New session 11 of user core. Jan 20 01:43:48.013922 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 01:43:48.447907 sshd[6262]: pam_unix(sshd:session): session closed for user core Jan 20 01:43:48.452025 systemd[1]: sshd@8-10.200.20.17:22-10.200.16.10:33428.service: Deactivated successfully. Jan 20 01:43:48.453967 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:43:48.454667 systemd-logind[1707]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:43:48.455671 systemd-logind[1707]: Removed session 11. Jan 20 01:43:49.417924 kubelet[3217]: E0120 01:43:49.417743 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:43:50.418052 kubelet[3217]: E0120 01:43:50.417940 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:43:51.417096 kubelet[3217]: E0120 01:43:51.416639 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:43:52.420303 kubelet[3217]: E0120 01:43:52.419950 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:43:53.416741 kubelet[3217]: E0120 01:43:53.416665 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:43:53.529017 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.16.10:52770.service - OpenSSH per-connection server daemon (10.200.16.10:52770). Jan 20 01:43:53.982970 sshd[6276]: Accepted publickey for core from 10.200.16.10 port 52770 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:53.983897 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:53.989843 systemd-logind[1707]: New session 12 of user core. Jan 20 01:43:53.992978 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:43:54.392271 sshd[6276]: pam_unix(sshd:session): session closed for user core Jan 20 01:43:54.396303 systemd-logind[1707]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:43:54.397250 systemd[1]: sshd@9-10.200.20.17:22-10.200.16.10:52770.service: Deactivated successfully. Jan 20 01:43:54.400044 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:43:54.401168 systemd-logind[1707]: Removed session 12. 
Jan 20 01:43:54.418197 containerd[1737]: time="2026-01-20T01:43:54.417892951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:43:54.686052 containerd[1737]: time="2026-01-20T01:43:54.685460577Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:54.688302 containerd[1737]: time="2026-01-20T01:43:54.688209338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:43:54.688483 containerd[1737]: time="2026-01-20T01:43:54.688435138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:54.688624 kubelet[3217]: E0120 01:43:54.688583 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:54.688907 kubelet[3217]: E0120 01:43:54.688635 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:54.688907 kubelet[3217]: E0120 01:43:54.688767 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fqhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qkv75_calico-system(68f5545e-7661-40cf-baeb-c5c30a862135): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:54.690230 kubelet[3217]: E0120 01:43:54.690192 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:43:56.417204 kubelet[3217]: E0120 01:43:56.417063 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:43:59.480621 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.16.10:52786.service - OpenSSH per-connection server daemon (10.200.16.10:52786). Jan 20 01:43:59.974808 sshd[6298]: Accepted publickey for core from 10.200.16.10 port 52786 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:59.975685 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:59.984133 systemd-logind[1707]: New session 13 of user core. Jan 20 01:43:59.986655 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 01:44:00.406985 sshd[6298]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:00.411103 systemd[1]: sshd@10-10.200.20.17:22-10.200.16.10:52786.service: Deactivated successfully. Jan 20 01:44:00.416164 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 20 01:44:00.418972 systemd-logind[1707]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:44:00.420712 systemd-logind[1707]: Removed session 13. Jan 20 01:44:00.511277 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.16.10:38848.service - OpenSSH per-connection server daemon (10.200.16.10:38848). Jan 20 01:44:01.005740 sshd[6314]: Accepted publickey for core from 10.200.16.10 port 38848 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:01.006734 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:01.012770 systemd-logind[1707]: New session 14 of user core. Jan 20 01:44:01.016174 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:44:01.458208 sshd[6314]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:01.461765 systemd[1]: sshd@11-10.200.20.17:22-10.200.16.10:38848.service: Deactivated successfully. Jan 20 01:44:01.463806 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:44:01.465421 systemd-logind[1707]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:44:01.467203 systemd-logind[1707]: Removed session 14. Jan 20 01:44:01.554132 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.16.10:38864.service - OpenSSH per-connection server daemon (10.200.16.10:38864). Jan 20 01:44:02.043058 sshd[6325]: Accepted publickey for core from 10.200.16.10 port 38864 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:02.044564 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:02.049627 systemd-logind[1707]: New session 15 of user core. Jan 20 01:44:02.060422 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 01:44:02.420985 containerd[1737]: time="2026-01-20T01:44:02.420838332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:02.470510 sshd[6325]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:02.476752 systemd[1]: sshd@12-10.200.20.17:22-10.200.16.10:38864.service: Deactivated successfully. Jan 20 01:44:02.479907 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 01:44:02.481598 systemd-logind[1707]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:44:02.482816 systemd-logind[1707]: Removed session 15. 
Jan 20 01:44:02.672683 containerd[1737]: time="2026-01-20T01:44:02.672523783Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:02.675194 containerd[1737]: time="2026-01-20T01:44:02.675150744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:02.675273 containerd[1737]: time="2026-01-20T01:44:02.675255704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:02.675469 kubelet[3217]: E0120 01:44:02.675430 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:02.675746 kubelet[3217]: E0120 01:44:02.675481 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:02.675746 kubelet[3217]: E0120 01:44:02.675608 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgvkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-588969c7f9-dsqq7_calico-apiserver(d821eeb9-a64e-4dc2-bbef-b0976a3bf49a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:02.677170 kubelet[3217]: E0120 01:44:02.677130 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:44:04.417988 containerd[1737]: time="2026-01-20T01:44:04.417940818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:44:04.675505 containerd[1737]: time="2026-01-20T01:44:04.675235311Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:04.677731 containerd[1737]: time="2026-01-20T01:44:04.677642511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:44:04.677731 containerd[1737]: time="2026-01-20T01:44:04.677719471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:44:04.677936 kubelet[3217]: E0120 01:44:04.677898 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:44:04.678201 kubelet[3217]: E0120 01:44:04.677948 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:44:04.678201 kubelet[3217]: E0120 01:44:04.678126 3217 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8f17a3a521ca43d5a97871bb0e325b25,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f58966dbf-54hk5_calico-system(a7a9b064-5e91-49bb-b0db-fcf6fce9b0be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:04.679672 containerd[1737]: time="2026-01-20T01:44:04.678686992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:05.028360 containerd[1737]: time="2026-01-20T01:44:05.027841823Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:05.031514 containerd[1737]: time="2026-01-20T01:44:05.031416943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:05.031514 containerd[1737]: time="2026-01-20T01:44:05.031505703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:05.031811 kubelet[3217]: E0120 01:44:05.031766 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:05.031887 kubelet[3217]: E0120 01:44:05.031822 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:05.032057 kubelet[3217]: E0120 01:44:05.032013 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xth98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7dd6fbd444-88ccv_calico-apiserver(97a804d9-65a3-4df8-a009-6289887849fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:05.032756 containerd[1737]: time="2026-01-20T01:44:05.032561864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:44:05.033744 kubelet[3217]: E0120 01:44:05.033666 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:44:05.288266 containerd[1737]: time="2026-01-20T01:44:05.288132996Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:05.290716 containerd[1737]: time="2026-01-20T01:44:05.290661676Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:44:05.290830 containerd[1737]: time="2026-01-20T01:44:05.290757996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:44:05.292023 kubelet[3217]: E0120 01:44:05.291969 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:44:05.292104 kubelet[3217]: E0120 01:44:05.292020 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:44:05.292320 containerd[1737]: time="2026-01-20T01:44:05.292297117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:44:05.292678 kubelet[3217]: E0120 01:44:05.292636 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:05.564530 containerd[1737]: time="2026-01-20T01:44:05.564145332Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:05.566772 containerd[1737]: time="2026-01-20T01:44:05.566727732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:44:05.566911 containerd[1737]: time="2026-01-20T01:44:05.566831932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:44:05.567603 kubelet[3217]: E0120 01:44:05.567040 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:44:05.567603 kubelet[3217]: E0120 01:44:05.567085 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:44:05.567603 kubelet[3217]: E0120 01:44:05.567273 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f58966dbf-54hk5_calico-system(a7a9b064-5e91-49bb-b0db-fcf6fce9b0be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:05.568203 containerd[1737]: time="2026-01-20T01:44:05.567861813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:44:05.569010 kubelet[3217]: E0120 01:44:05.568944 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:44:05.829439 containerd[1737]: time="2026-01-20T01:44:05.829328506Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:05.832085 containerd[1737]: time="2026-01-20T01:44:05.832043906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:44:05.832292 containerd[1737]: time="2026-01-20T01:44:05.832089946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:44:05.832445 kubelet[3217]: E0120 01:44:05.832407 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:44:05.832715 kubelet[3217]: E0120 01:44:05.832457 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:44:05.832715 kubelet[3217]: E0120 01:44:05.832567 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gxvd_calico-system(44bdd32b-1d8e-4e5b-bb73-1e59535dcb96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:05.833936 kubelet[3217]: E0120 01:44:05.833905 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:44:06.418868 containerd[1737]: time="2026-01-20T01:44:06.418708266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:06.661760 containerd[1737]: time="2026-01-20T01:44:06.661557955Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:06.665378 containerd[1737]: time="2026-01-20T01:44:06.665271236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:06.665378 containerd[1737]: time="2026-01-20T01:44:06.665338876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:06.665847 kubelet[3217]: E0120 01:44:06.665633 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:06.665847 kubelet[3217]: E0120 01:44:06.665677 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:06.666029 kubelet[3217]: E0120 01:44:06.665960 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddcnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-588969c7f9-g5sn6_calico-apiserver(8c28c5ae-f540-4875-a2fd-481f9d148cbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:06.667248 kubelet[3217]: E0120 01:44:06.667214 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:44:07.418430 containerd[1737]: time="2026-01-20T01:44:07.418179061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:44:07.565924 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.16.10:38876.service - OpenSSH per-connection server daemon (10.200.16.10:38876). 
Jan 20 01:44:07.677569 containerd[1737]: time="2026-01-20T01:44:07.677426853Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:07.683813 containerd[1737]: time="2026-01-20T01:44:07.683709094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:44:07.683813 containerd[1737]: time="2026-01-20T01:44:07.683776254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:44:07.684162 kubelet[3217]: E0120 01:44:07.684110 3217 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:44:07.684513 kubelet[3217]: E0120 01:44:07.684166 3217 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:44:07.684513 kubelet[3217]: E0120 01:44:07.684299 3217 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhlh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59886bc69c-2p6tc_calico-system(a793b124-6073-4604-9c24-ad5326cb3836): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:07.685836 kubelet[3217]: E0120 01:44:07.685790 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:44:08.059892 sshd[6360]: Accepted publickey for core from 10.200.16.10 port 38876 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:08.060877 sshd[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:08.064896 systemd-logind[1707]: New session 16 of user core. Jan 20 01:44:08.070150 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 01:44:08.422477 kubelet[3217]: E0120 01:44:08.422362 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:44:08.530672 sshd[6360]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:08.533248 systemd-logind[1707]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:44:08.535053 systemd[1]: sshd@13-10.200.20.17:22-10.200.16.10:38876.service: Deactivated successfully. Jan 20 01:44:08.539365 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:44:08.540817 systemd-logind[1707]: Removed session 16. Jan 20 01:44:13.625265 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.16.10:52250.service - OpenSSH per-connection server daemon (10.200.16.10:52250). 
Jan 20 01:44:14.110688 sshd[6388]: Accepted publickey for core from 10.200.16.10 port 52250 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:14.112128 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:14.116198 systemd-logind[1707]: New session 17 of user core. Jan 20 01:44:14.122949 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:44:14.535308 sshd[6388]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:14.539895 systemd[1]: sshd@14-10.200.20.17:22-10.200.16.10:52250.service: Deactivated successfully. Jan 20 01:44:14.542334 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:44:14.543486 systemd-logind[1707]: Session 17 logged out. Waiting for processes to exit. Jan 20 01:44:14.544442 systemd-logind[1707]: Removed session 17. Jan 20 01:44:16.419172 kubelet[3217]: E0120 01:44:16.419110 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:44:16.420813 kubelet[3217]: E0120 01:44:16.420681 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:44:19.419794 kubelet[3217]: E0120 01:44:19.417220 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:44:19.620868 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.16.10:33196.service - OpenSSH per-connection server daemon (10.200.16.10:33196). Jan 20 01:44:20.071816 sshd[6414]: Accepted publickey for core from 10.200.16.10 port 33196 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:20.074227 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:20.083032 systemd-logind[1707]: New session 18 of user core. Jan 20 01:44:20.086947 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 01:44:20.421615 kubelet[3217]: E0120 01:44:20.421559 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:44:20.422025 kubelet[3217]: E0120 01:44:20.421666 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:44:20.478328 sshd[6414]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:20.483965 systemd-logind[1707]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:44:20.484257 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:44:20.486328 systemd[1]: sshd@15-10.200.20.17:22-10.200.16.10:33196.service: Deactivated successfully. Jan 20 01:44:20.490282 systemd-logind[1707]: Removed session 18. 
Jan 20 01:44:22.418524 kubelet[3217]: E0120 01:44:22.418204 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:44:22.420384 kubelet[3217]: E0120 01:44:22.420244 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:44:25.568348 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.16.10:33212.service - OpenSSH per-connection server daemon (10.200.16.10:33212). Jan 20 01:44:26.012802 sshd[6429]: Accepted publickey for core from 10.200.16.10 port 33212 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:26.014575 sshd[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:26.020733 systemd-logind[1707]: New session 19 of user core. Jan 20 01:44:26.024950 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:44:26.431313 sshd[6429]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:26.438038 systemd-logind[1707]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:44:26.438897 systemd[1]: sshd@16-10.200.20.17:22-10.200.16.10:33212.service: Deactivated successfully. Jan 20 01:44:26.445092 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:44:26.446253 systemd-logind[1707]: Removed session 19. Jan 20 01:44:26.541125 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.16.10:33220.service - OpenSSH per-connection server daemon (10.200.16.10:33220). Jan 20 01:44:26.992144 sshd[6442]: Accepted publickey for core from 10.200.16.10 port 33220 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:26.993554 sshd[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:27.001119 systemd-logind[1707]: New session 20 of user core. Jan 20 01:44:27.007948 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:44:27.536541 sshd[6442]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:27.541844 systemd-logind[1707]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:44:27.542829 systemd[1]: sshd@17-10.200.20.17:22-10.200.16.10:33220.service: Deactivated successfully. Jan 20 01:44:27.546210 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:44:27.549968 systemd-logind[1707]: Removed session 20. Jan 20 01:44:27.630543 systemd[1]: Started sshd@18-10.200.20.17:22-10.200.16.10:33232.service - OpenSSH per-connection server daemon (10.200.16.10:33232). 
Jan 20 01:44:28.121814 sshd[6453]: Accepted publickey for core from 10.200.16.10 port 33232 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:28.123503 sshd[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:28.130378 systemd-logind[1707]: New session 21 of user core. Jan 20 01:44:28.136431 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 01:44:28.419968 kubelet[3217]: E0120 01:44:28.419910 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:44:28.423307 kubelet[3217]: E0120 01:44:28.422089 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:44:29.244346 sshd[6453]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:29.248063 systemd-logind[1707]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:44:29.249261 systemd[1]: sshd@18-10.200.20.17:22-10.200.16.10:33232.service: Deactivated successfully. Jan 20 01:44:29.251832 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:44:29.253308 systemd-logind[1707]: Removed session 21. Jan 20 01:44:29.346877 systemd[1]: Started sshd@19-10.200.20.17:22-10.200.16.10:33240.service - OpenSSH per-connection server daemon (10.200.16.10:33240). Jan 20 01:44:29.804805 sshd[6476]: Accepted publickey for core from 10.200.16.10 port 33240 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:29.806299 sshd[6476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:29.810905 systemd-logind[1707]: New session 22 of user core. Jan 20 01:44:29.815954 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 01:44:30.335835 sshd[6476]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:30.339894 systemd-logind[1707]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:44:30.340165 systemd[1]: sshd@19-10.200.20.17:22-10.200.16.10:33240.service: Deactivated successfully. Jan 20 01:44:30.342167 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:44:30.343244 systemd-logind[1707]: Removed session 22. 
Jan 20 01:44:30.418377 kubelet[3217]: E0120 01:44:30.418337 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:44:30.436129 systemd[1]: Started sshd@20-10.200.20.17:22-10.200.16.10:43380.service - OpenSSH per-connection server daemon (10.200.16.10:43380). Jan 20 01:44:30.929617 sshd[6486]: Accepted publickey for core from 10.200.16.10 port 43380 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:30.931378 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:30.938195 systemd-logind[1707]: New session 23 of user core. Jan 20 01:44:30.942943 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 01:44:31.368687 sshd[6486]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:31.373373 systemd[1]: sshd@20-10.200.20.17:22-10.200.16.10:43380.service: Deactivated successfully. Jan 20 01:44:31.378748 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:44:31.380427 systemd-logind[1707]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:44:31.381324 systemd-logind[1707]: Removed session 23. Jan 20 01:44:32.418932 kubelet[3217]: E0120 01:44:32.418863 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:44:32.419639 kubelet[3217]: E0120 01:44:32.419062 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:44:34.421969 kubelet[3217]: E0120 01:44:34.421900 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:44:34.422620 kubelet[3217]: E0120 01:44:34.422254 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:44:36.092553 systemd[1]: run-containerd-runc-k8s.io-035fc14488e5de1e7d0ab0869d29ae9118a681e464ef2d52aef9fa4d500dc2d2-runc.32EybI.mount: Deactivated successfully. Jan 20 01:44:36.458603 systemd[1]: Started sshd@21-10.200.20.17:22-10.200.16.10:43392.service - OpenSSH per-connection server daemon (10.200.16.10:43392). Jan 20 01:44:36.946872 sshd[6522]: Accepted publickey for core from 10.200.16.10 port 43392 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:36.949186 sshd[6522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:36.958301 systemd-logind[1707]: New session 24 of user core. Jan 20 01:44:36.962963 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 01:44:37.392001 sshd[6522]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:37.395522 systemd[1]: sshd@21-10.200.20.17:22-10.200.16.10:43392.service: Deactivated successfully. Jan 20 01:44:37.400582 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:44:37.403012 systemd-logind[1707]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:44:37.405102 systemd-logind[1707]: Removed session 24. 
Jan 20 01:44:40.418004 kubelet[3217]: E0120 01:44:40.416868 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:44:41.418411 kubelet[3217]: E0120 01:44:41.417675 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:44:42.487066 systemd[1]: Started sshd@22-10.200.20.17:22-10.200.16.10:47914.service - OpenSSH per-connection server daemon (10.200.16.10:47914). Jan 20 01:44:42.976034 sshd[6535]: Accepted publickey for core from 10.200.16.10 port 47914 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:42.976916 sshd[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:42.981631 systemd-logind[1707]: New session 25 of user core. Jan 20 01:44:42.986947 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:44:43.409151 sshd[6535]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:43.413627 systemd-logind[1707]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:44:43.415618 systemd[1]: sshd@22-10.200.20.17:22-10.200.16.10:47914.service: Deactivated successfully. Jan 20 01:44:43.417931 kubelet[3217]: E0120 01:44:43.417897 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:44:43.420460 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:44:43.424147 systemd-logind[1707]: Removed session 25. 
Jan 20 01:44:44.419128 kubelet[3217]: E0120 01:44:44.418932 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:44:45.417880 kubelet[3217]: E0120 01:44:45.416835 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:44:45.417880 kubelet[3217]: E0120 01:44:45.416865 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:44:47.417481 kubelet[3217]: E0120 01:44:47.417433 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:44:48.503058 systemd[1]: Started sshd@23-10.200.20.17:22-10.200.16.10:47922.service - OpenSSH per-connection server daemon 
(10.200.16.10:47922). Jan 20 01:44:48.997917 sshd[6550]: Accepted publickey for core from 10.200.16.10 port 47922 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:48.999532 sshd[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:49.005402 systemd-logind[1707]: New session 26 of user core. Jan 20 01:44:49.009084 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 01:44:49.414013 sshd[6550]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:49.416709 systemd-logind[1707]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:44:49.417225 systemd[1]: sshd@23-10.200.20.17:22-10.200.16.10:47922.service: Deactivated successfully. Jan 20 01:44:49.419674 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:44:49.422589 systemd-logind[1707]: Removed session 26. Jan 20 01:44:51.417013 kubelet[3217]: E0120 01:44:51.416105 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7dd6fbd444-88ccv" podUID="97a804d9-65a3-4df8-a009-6289887849fb" Jan 20 01:44:54.508673 systemd[1]: Started sshd@24-10.200.20.17:22-10.200.16.10:50450.service - OpenSSH per-connection server daemon (10.200.16.10:50450). Jan 20 01:44:55.000571 sshd[6564]: Accepted publickey for core from 10.200.16.10 port 50450 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:55.002850 sshd[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:55.008129 systemd-logind[1707]: New session 27 of user core. Jan 20 01:44:55.013679 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 01:44:55.416818 kubelet[3217]: E0120 01:44:55.415887 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-g5sn6" podUID="8c28c5ae-f540-4875-a2fd-481f9d148cbd" Jan 20 01:44:55.434454 sshd[6564]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:55.437950 systemd[1]: sshd@24-10.200.20.17:22-10.200.16.10:50450.service: Deactivated successfully. Jan 20 01:44:55.440140 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 01:44:55.442130 systemd-logind[1707]: Session 27 logged out. Waiting for processes to exit. Jan 20 01:44:55.443403 systemd-logind[1707]: Removed session 27. 
Jan 20 01:44:56.419224 kubelet[3217]: E0120 01:44:56.418914 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-588969c7f9-dsqq7" podUID="d821eeb9-a64e-4dc2-bbef-b0976a3bf49a" Jan 20 01:44:56.421063 kubelet[3217]: E0120 01:44:56.421015 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f58966dbf-54hk5" podUID="a7a9b064-5e91-49bb-b0db-fcf6fce9b0be" Jan 20 01:44:58.418539 kubelet[3217]: E0120 01:44:58.418409 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qkv75" podUID="68f5545e-7661-40cf-baeb-c5c30a862135" Jan 20 01:44:58.418539 kubelet[3217]: E0120 01:44:58.418472 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59886bc69c-2p6tc" podUID="a793b124-6073-4604-9c24-ad5326cb3836" Jan 20 01:44:59.417340 kubelet[3217]: E0120 01:44:59.417163 3217 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gxvd" podUID="44bdd32b-1d8e-4e5b-bb73-1e59535dcb96" Jan 20 01:45:00.521274 systemd[1]: Started sshd@25-10.200.20.17:22-10.200.16.10:59350.service - OpenSSH per-connection server daemon (10.200.16.10:59350). Jan 20 01:45:00.969878 sshd[6579]: Accepted publickey for core from 10.200.16.10 port 59350 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:45:00.970741 sshd[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:00.974758 systemd-logind[1707]: New session 28 of user core. Jan 20 01:45:00.979922 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 01:45:01.371885 sshd[6579]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:01.376259 systemd[1]: sshd@25-10.200.20.17:22-10.200.16.10:59350.service: Deactivated successfully. Jan 20 01:45:01.376310 systemd-logind[1707]: Session 28 logged out. Waiting for processes to exit. Jan 20 01:45:01.378375 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 01:45:01.380694 systemd-logind[1707]: Removed session 28.