Jan 17 00:03:35.188103 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 00:03:35.188123 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 17 00:03:35.188131 kernel: KASLR enabled
Jan 17 00:03:35.188137 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 17 00:03:35.188144 kernel: printk: bootconsole [pl11] enabled
Jan 17 00:03:35.188150 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:03:35.188157 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 17 00:03:35.188163 kernel: random: crng init done
Jan 17 00:03:35.188169 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:03:35.188175 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 17 00:03:35.188181 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188187 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188194 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 17 00:03:35.188201 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188208 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188214 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188221 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188229 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188236 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188242 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 17 00:03:35.188249 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:03:35.188255 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 17 00:03:35.188262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 17 00:03:35.188268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 17 00:03:35.188274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 17 00:03:35.188281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 17 00:03:35.188287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 17 00:03:35.188293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 17 00:03:35.188301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 17 00:03:35.188308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 17 00:03:35.188314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 17 00:03:35.188320 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 17 00:03:35.188327 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 17 00:03:35.188333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 17 00:03:35.188339 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 17 00:03:35.188346 kernel: Zone ranges:
Jan 17 00:03:35.188352 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 17 00:03:35.188359 kernel: DMA32 empty
Jan 17 00:03:35.188365 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 17 00:03:35.188372 kernel: Movable zone start for each node
Jan 17 00:03:35.188382 kernel: Early memory node ranges
Jan 17 00:03:35.188389 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 17 00:03:35.188396 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 17 00:03:35.188403 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 17 00:03:35.188410 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 17 00:03:35.188418 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 17 00:03:35.188425 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 17 00:03:35.188431 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 17 00:03:35.188438 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 17 00:03:35.188445 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 17 00:03:35.188452 kernel: psci: probing for conduit method from ACPI.
Jan 17 00:03:35.188459 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 00:03:35.188465 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 00:03:35.188472 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 17 00:03:35.188479 kernel: psci: SMC Calling Convention v1.4
Jan 17 00:03:35.189649 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 17 00:03:35.189660 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 17 00:03:35.189672 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 17 00:03:35.189679 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 17 00:03:35.189686 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 00:03:35.189693 kernel: Detected PIPT I-cache on CPU0
Jan 17 00:03:35.189699 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 00:03:35.189706 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 00:03:35.189713 kernel: CPU features: detected: Spectre-BHB
Jan 17 00:03:35.189720 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 00:03:35.189727 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 00:03:35.189734 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 00:03:35.189741 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 17 00:03:35.189749 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 00:03:35.189756 kernel: alternatives: applying boot alternatives
Jan 17 00:03:35.189764 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:03:35.189771 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:03:35.189778 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:03:35.189785 kernel: Fallback order for Node 0: 0
Jan 17 00:03:35.189792 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 17 00:03:35.189798 kernel: Policy zone: Normal
Jan 17 00:03:35.189805 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:03:35.189812 kernel: software IO TLB: area num 2.
Jan 17 00:03:35.189819 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 17 00:03:35.189827 kernel: Memory: 3982632K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211528K reserved, 0K cma-reserved)
Jan 17 00:03:35.189834 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:03:35.189841 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:03:35.189848 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:03:35.189855 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:03:35.189862 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:03:35.189869 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:03:35.189876 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:03:35.189883 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:03:35.189889 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 00:03:35.189896 kernel: GICv3: 960 SPIs implemented
Jan 17 00:03:35.189904 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 00:03:35.189911 kernel: Root IRQ handler: gic_handle_irq
Jan 17 00:03:35.189918 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 17 00:03:35.189925 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 17 00:03:35.189931 kernel: ITS: No ITS available, not enabling LPIs
Jan 17 00:03:35.189938 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:03:35.189945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:03:35.189952 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 00:03:35.189959 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 00:03:35.189965 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 00:03:35.189973 kernel: Console: colour dummy device 80x25
Jan 17 00:03:35.189981 kernel: printk: console [tty1] enabled
Jan 17 00:03:35.189988 kernel: ACPI: Core revision 20230628
Jan 17 00:03:35.189995 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 00:03:35.190002 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:03:35.190009 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:03:35.190016 kernel: landlock: Up and running.
Jan 17 00:03:35.190023 kernel: SELinux: Initializing.
Jan 17 00:03:35.190030 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:03:35.190037 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:03:35.190045 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:03:35.190052 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:03:35.190060 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 17 00:03:35.190067 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 17 00:03:35.190074 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 17 00:03:35.190081 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:03:35.190088 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:03:35.190095 kernel: Remapping and enabling EFI services.
Jan 17 00:03:35.190108 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:03:35.190116 kernel: Detected PIPT I-cache on CPU1
Jan 17 00:03:35.190124 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 17 00:03:35.190131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:03:35.190139 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 00:03:35.190147 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:03:35.190154 kernel: SMP: Total of 2 processors activated.
Jan 17 00:03:35.190162 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 00:03:35.190169 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 17 00:03:35.190178 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 00:03:35.190185 kernel: CPU features: detected: CRC32 instructions
Jan 17 00:03:35.190193 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 00:03:35.190200 kernel: CPU features: detected: LSE atomic instructions
Jan 17 00:03:35.190207 kernel: CPU features: detected: Privileged Access Never
Jan 17 00:03:35.190214 kernel: CPU: All CPU(s) started at EL1
Jan 17 00:03:35.190222 kernel: alternatives: applying system-wide alternatives
Jan 17 00:03:35.190229 kernel: devtmpfs: initialized
Jan 17 00:03:35.190236 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:03:35.190245 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:03:35.190252 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:03:35.190260 kernel: SMBIOS 3.1.0 present.
Jan 17 00:03:35.190267 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 17 00:03:35.190274 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:03:35.190282 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 00:03:35.190289 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 00:03:35.190297 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 00:03:35.190304 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:03:35.190313 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 17 00:03:35.190320 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:03:35.190328 kernel: cpuidle: using governor menu
Jan 17 00:03:35.190335 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 00:03:35.190342 kernel: ASID allocator initialised with 32768 entries
Jan 17 00:03:35.190350 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:03:35.190357 kernel: Serial: AMBA PL011 UART driver
Jan 17 00:03:35.190365 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 00:03:35.190372 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 00:03:35.190380 kernel: Modules: 509008 pages in range for PLT usage
Jan 17 00:03:35.190388 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:03:35.190395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:03:35.190402 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 00:03:35.190410 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 00:03:35.190417 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:03:35.190427 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:03:35.190436 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 00:03:35.190444 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 00:03:35.190455 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:03:35.190464 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:03:35.190472 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:03:35.190480 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:03:35.190497 kernel: ACPI: Interpreter enabled
Jan 17 00:03:35.190506 kernel: ACPI: Using GIC for interrupt routing
Jan 17 00:03:35.190514 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 00:03:35.190522 kernel: printk: console [ttyAMA0] enabled
Jan 17 00:03:35.190535 kernel: printk: bootconsole [pl11] disabled
Jan 17 00:03:35.190546 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 17 00:03:35.190555 kernel: iommu: Default domain type: Translated
Jan 17 00:03:35.190563 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 00:03:35.190572 kernel: efivars: Registered efivars operations
Jan 17 00:03:35.190580 kernel: vgaarb: loaded
Jan 17 00:03:35.190588 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 00:03:35.190595 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:03:35.190602 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:03:35.190609 kernel: pnp: PnP ACPI init
Jan 17 00:03:35.190618 kernel: pnp: PnP ACPI: found 0 devices
Jan 17 00:03:35.190626 kernel: NET: Registered PF_INET protocol family
Jan 17 00:03:35.190635 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:03:35.190644 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:03:35.190652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:03:35.190661 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:03:35.190669 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:03:35.190678 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:03:35.190686 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:03:35.190697 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:03:35.190705 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:03:35.190713 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:03:35.190722 kernel: kvm [1]: HYP mode not available
Jan 17 00:03:35.190729 kernel: Initialise system trusted keyrings
Jan 17 00:03:35.190737 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:03:35.190744 kernel: Key type asymmetric registered
Jan 17 00:03:35.190751 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:03:35.190758 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 00:03:35.190767 kernel: io scheduler mq-deadline registered
Jan 17 00:03:35.190774 kernel: io scheduler kyber registered
Jan 17 00:03:35.190782 kernel: io scheduler bfq registered
Jan 17 00:03:35.190789 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:03:35.190796 kernel: thunder_xcv, ver 1.0
Jan 17 00:03:35.190803 kernel: thunder_bgx, ver 1.0
Jan 17 00:03:35.190810 kernel: nicpf, ver 1.0
Jan 17 00:03:35.190817 kernel: nicvf, ver 1.0
Jan 17 00:03:35.190952 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 00:03:35.191026 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:03:34 UTC (1768608214)
Jan 17 00:03:35.191037 kernel: efifb: probing for efifb
Jan 17 00:03:35.191044 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 17 00:03:35.191052 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 17 00:03:35.191059 kernel: efifb: scrolling: redraw
Jan 17 00:03:35.191066 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:03:35.191074 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 00:03:35.191081 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:03:35.191091 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 17 00:03:35.191099 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:03:35.191106 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 17 00:03:35.191113 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 00:03:35.191121 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 00:03:35.191128 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:03:35.191135 kernel: Segment Routing with IPv6
Jan 17 00:03:35.191143 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:03:35.191150 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:03:35.191159 kernel: Key type dns_resolver registered
Jan 17 00:03:35.191166 kernel: registered taskstats version 1
Jan 17 00:03:35.191173 kernel: Loading compiled-in X.509 certificates
Jan 17 00:03:35.191181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 17 00:03:35.191188 kernel: Key type .fscrypt registered
Jan 17 00:03:35.191195 kernel: Key type fscrypt-provisioning registered
Jan 17 00:03:35.191202 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:03:35.191209 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:03:35.191217 kernel: ima: No architecture policies found
Jan 17 00:03:35.191226 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 00:03:35.191233 kernel: clk: Disabling unused clocks
Jan 17 00:03:35.191240 kernel: Freeing unused kernel memory: 39424K
Jan 17 00:03:35.191248 kernel: Run /init as init process
Jan 17 00:03:35.191255 kernel: with arguments:
Jan 17 00:03:35.191262 kernel: /init
Jan 17 00:03:35.191269 kernel: with environment:
Jan 17 00:03:35.191276 kernel: HOME=/
Jan 17 00:03:35.191284 kernel: TERM=linux
Jan 17 00:03:35.191293 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:03:35.191304 systemd[1]: Detected virtualization microsoft.
Jan 17 00:03:35.191312 systemd[1]: Detected architecture arm64.
Jan 17 00:03:35.191320 systemd[1]: Running in initrd.
Jan 17 00:03:35.191328 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:03:35.191335 systemd[1]: Hostname set to <localhost>.
Jan 17 00:03:35.191343 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:03:35.191353 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:03:35.191361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:03:35.191369 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:03:35.191378 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:03:35.191386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:03:35.191394 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:03:35.191402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:03:35.191412 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:03:35.191421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:03:35.191429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:03:35.191437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:03:35.191445 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:03:35.191453 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:03:35.191461 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:03:35.191469 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:03:35.191477 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:03:35.195076 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:03:35.195093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:03:35.195101 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:03:35.195110 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:03:35.195118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:03:35.195127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:03:35.195135 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:03:35.195143 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:03:35.195158 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:03:35.195166 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:03:35.195174 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:03:35.195182 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:03:35.195190 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:03:35.195226 systemd-journald[217]: Collecting audit messages is disabled.
Jan 17 00:03:35.195248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:03:35.195257 systemd-journald[217]: Journal started
Jan 17 00:03:35.195276 systemd-journald[217]: Runtime Journal (/run/log/journal/8aea3ac1837a416c9436dabba461290e) is 8.0M, max 78.5M, 70.5M free.
Jan 17 00:03:35.195877 systemd-modules-load[218]: Inserted module 'overlay'
Jan 17 00:03:35.212612 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:03:35.222499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:03:35.224523 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:03:35.236182 kernel: Bridge firewalling registered
Jan 17 00:03:35.231290 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 17 00:03:35.232149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:03:35.241872 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:03:35.248718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:03:35.258194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:03:35.276828 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:03:35.288772 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:03:35.299775 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:03:35.326748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:03:35.333509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:03:35.343178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:03:35.348207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:03:35.360241 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:03:35.381666 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:03:35.388628 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:03:35.404285 dracut-cmdline[250]: dracut-dracut-053
Jan 17 00:03:35.411645 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:03:35.409784 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:03:35.446005 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:03:35.476759 systemd-resolved[254]: Positive Trust Anchors:
Jan 17 00:03:35.476772 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:03:35.476804 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:03:35.482107 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jan 17 00:03:35.482978 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:03:35.494293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:03:35.543499 kernel: SCSI subsystem initialized
Jan 17 00:03:35.550497 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:03:35.560506 kernel: iscsi: registered transport (tcp)
Jan 17 00:03:35.574492 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:03:35.574512 kernel: QLogic iSCSI HBA Driver
Jan 17 00:03:35.608753 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:03:35.619690 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:03:35.651796 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:03:35.651858 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:03:35.656665 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:03:35.703512 kernel: raid6: neonx8 gen() 15807 MB/s
Jan 17 00:03:35.722492 kernel: raid6: neonx4 gen() 15691 MB/s
Jan 17 00:03:35.741493 kernel: raid6: neonx2 gen() 13264 MB/s
Jan 17 00:03:35.761490 kernel: raid6: neonx1 gen() 10562 MB/s
Jan 17 00:03:35.780489 kernel: raid6: int64x8 gen() 6978 MB/s
Jan 17 00:03:35.799489 kernel: raid6: int64x4 gen() 7353 MB/s
Jan 17 00:03:35.819489 kernel: raid6: int64x2 gen() 6147 MB/s
Jan 17 00:03:35.841605 kernel: raid6: int64x1 gen() 5072 MB/s
Jan 17 00:03:35.841624 kernel: raid6: using algorithm neonx8 gen() 15807 MB/s
Jan 17 00:03:35.863938 kernel: raid6: .... xor() 11952 MB/s, rmw enabled
Jan 17 00:03:35.863958 kernel: raid6: using neon recovery algorithm
Jan 17 00:03:35.874632 kernel: xor: measuring software checksum speed
Jan 17 00:03:35.874647 kernel: 8regs : 19769 MB/sec
Jan 17 00:03:35.877582 kernel: 32regs : 19679 MB/sec
Jan 17 00:03:35.880333 kernel: arm64_neon : 27186 MB/sec
Jan 17 00:03:35.883528 kernel: xor: using function: arm64_neon (27186 MB/sec)
Jan 17 00:03:35.932505 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:03:35.943111 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:03:35.955632 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:03:35.974572 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Jan 17 00:03:35.979044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:03:36.031689 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:03:36.051555 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Jan 17 00:03:36.078458 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:03:36.093896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:03:36.130267 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:03:36.146720 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:03:36.168142 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:03:36.177827 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:03:36.188719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:03:36.204398 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:03:36.230667 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:03:36.255646 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:03:36.275086 kernel: hv_vmbus: Vmbus version:5.3
Jan 17 00:03:36.275107 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 17 00:03:36.277265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:03:36.277435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:03:36.349771 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 17 00:03:36.349796 kernel: hv_vmbus: registering driver hid_hyperv
Jan 17 00:03:36.349807 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 17 00:03:36.349817 kernel: hv_vmbus: registering driver hv_storvsc
Jan 17 00:03:36.349826 kernel: hv_vmbus: registering driver hv_netvsc
Jan 17 00:03:36.349844 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 17 00:03:36.349853 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 17 00:03:36.350013 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 17 00:03:36.295850 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:03:36.367299 kernel: scsi host1: storvsc_host_t
Jan 17 00:03:36.367470 kernel: scsi host0: storvsc_host_t
Jan 17 00:03:36.367593 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 17 00:03:36.312523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:03:36.379867 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 17 00:03:36.312735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:03:36.391943 kernel: PTP clock support registered
Jan 17 00:03:36.360719 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:03:36.409069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:03:36.289636 kernel: hv_utils: Registering HyperV Utility Driver
Jan 17 00:03:36.294740 kernel: hv_vmbus: registering driver hv_utils
Jan 17 00:03:36.294754 kernel: hv_utils: Heartbeat IC version 3.0
Jan 17 00:03:36.294762 kernel: hv_utils: Shutdown IC version 3.2
Jan 17 00:03:36.294771 kernel: hv_utils: TimeSync IC version 4.0
Jan 17 00:03:36.294779 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 17 00:03:36.294907 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:03:36.294916 systemd-journald[217]: Time jumped backwards, rotating.
Jan 17 00:03:36.294952 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 17 00:03:36.425682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:03:36.311178 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 17 00:03:36.311356 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 17 00:03:36.311445 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 00:03:36.425777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:03:36.342702 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: VF slot 1 added
Jan 17 00:03:36.342915 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 17 00:03:36.343030 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 17 00:03:36.343118 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:03:36.343127 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 00:03:36.276846 systemd-resolved[254]: Clock change detected. Flushing caches.
Jan 17 00:03:36.370298 kernel: hv_vmbus: registering driver hv_pci
Jan 17 00:03:36.370318 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#255 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:03:36.325049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:03:36.384340 kernel: hv_pci cd10f029-1840-48df-a83d-bf82dd7b6dec: PCI VMBus probing: Using version 0x10004
Jan 17 00:03:36.365746 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:03:36.399654 kernel: hv_pci cd10f029-1840-48df-a83d-bf82dd7b6dec: PCI host bridge to bus 1840:00
Jan 17 00:03:36.399802 kernel: pci_bus 1840:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 17 00:03:36.385724 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:03:36.414698 kernel: pci_bus 1840:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 17 00:03:36.414816 kernel: pci 1840:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 17 00:03:36.432582 kernel: pci 1840:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 17 00:03:36.444769 kernel: pci 1840:00:02.0: enabling Extended Tags
Jan 17 00:03:36.444829 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#199 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:03:36.450673 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:03:36.484036 kernel: pci 1840:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1840:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 17 00:03:36.484205 kernel: pci_bus 1840:00: busn_res: [bus 00-ff] end is updated to 00
Jan 17 00:03:36.484296 kernel: pci 1840:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 17 00:03:36.522990 kernel: mlx5_core 1840:00:02.0: enabling device (0000 -> 0002)
Jan 17 00:03:36.528541 kernel: mlx5_core 1840:00:02.0: firmware version: 16.30.5026
Jan 17 00:03:36.729213 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: VF registering: eth1
Jan 17 00:03:36.729418 kernel: mlx5_core 1840:00:02.0 eth1: joined to eth0
Jan 17 00:03:36.734808 kernel: mlx5_core 1840:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 17 00:03:36.744541 kernel: mlx5_core 1840:00:02.0 enP6208s1: renamed from eth1
Jan 17 00:03:36.839567 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 17 00:03:36.883654 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (487)
Jan 17 00:03:36.897589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 00:03:36.931234 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 17 00:03:36.958545 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (491)
Jan 17 00:03:36.970104 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 17 00:03:36.975828 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 17 00:03:37.007755 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:03:37.029627 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:03:37.037548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:03:37.044538 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:03:38.055145 disk-uuid[607]: The operation has completed successfully.
Jan 17 00:03:38.060279 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:03:38.126661 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:03:38.130692 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:03:38.153695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:03:38.165130 sh[720]: Success
Jan 17 00:03:38.189547 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 00:03:38.441026 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:03:38.449648 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:03:38.457028 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:03:38.497423 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 17 00:03:38.497474 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:03:38.503735 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:03:38.508296 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:03:38.512081 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:03:38.817664 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:03:38.822162 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:03:38.843806 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:03:38.853092 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:03:38.884191 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:03:38.884254 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:03:38.887732 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:03:38.925551 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:03:38.933561 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:03:38.944977 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:03:38.954375 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:03:38.964683 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:03:38.980754 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:03:38.992765 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:03:39.030455 systemd-networkd[904]: lo: Link UP
Jan 17 00:03:39.030466 systemd-networkd[904]: lo: Gained carrier
Jan 17 00:03:39.032035 systemd-networkd[904]: Enumeration completed
Jan 17 00:03:39.033879 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:03:39.033883 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:03:39.036620 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:03:39.042233 systemd[1]: Reached target network.target - Network.
Jan 17 00:03:39.120545 kernel: mlx5_core 1840:00:02.0 enP6208s1: Link up
Jan 17 00:03:39.158551 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: Data path switched to VF: enP6208s1
Jan 17 00:03:39.159581 systemd-networkd[904]: enP6208s1: Link UP
Jan 17 00:03:39.159676 systemd-networkd[904]: eth0: Link UP
Jan 17 00:03:39.159772 systemd-networkd[904]: eth0: Gained carrier
Jan 17 00:03:39.159780 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:03:39.181176 systemd-networkd[904]: enP6208s1: Gained carrier
Jan 17 00:03:39.194560 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 17 00:03:39.918799 ignition[903]: Ignition 2.19.0
Jan 17 00:03:39.918808 ignition[903]: Stage: fetch-offline
Jan 17 00:03:39.922891 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:03:39.918843 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:39.918851 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:39.918941 ignition[903]: parsed url from cmdline: ""
Jan 17 00:03:39.940741 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:03:39.918944 ignition[903]: no config URL provided
Jan 17 00:03:39.918948 ignition[903]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:03:39.918955 ignition[903]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:03:39.918959 ignition[903]: failed to fetch config: resource requires networking
Jan 17 00:03:39.919418 ignition[903]: Ignition finished successfully
Jan 17 00:03:39.965009 ignition[914]: Ignition 2.19.0
Jan 17 00:03:39.965015 ignition[914]: Stage: fetch
Jan 17 00:03:39.965211 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:39.965222 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:39.965322 ignition[914]: parsed url from cmdline: ""
Jan 17 00:03:39.965326 ignition[914]: no config URL provided
Jan 17 00:03:39.965331 ignition[914]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:03:39.965338 ignition[914]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:03:39.965362 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 17 00:03:40.074862 ignition[914]: GET result: OK
Jan 17 00:03:40.074948 ignition[914]: config has been read from IMDS userdata
Jan 17 00:03:40.075031 ignition[914]: parsing config with SHA512: 943086293b4ce6a7a73782aa06ccae527f8a3f3e693a1dda90f046d77581ad410fb18e75ef8778198ed0085757cf41bcfc0e01226ba2235f71434268b5f65cc5
Jan 17 00:03:40.078661 unknown[914]: fetched base config from "system"
Jan 17 00:03:40.079034 ignition[914]: fetch: fetch complete
Jan 17 00:03:40.078675 unknown[914]: fetched base config from "system"
Jan 17 00:03:40.079038 ignition[914]: fetch: fetch passed
Jan 17 00:03:40.078681 unknown[914]: fetched user config from "azure"
Jan 17 00:03:40.079078 ignition[914]: Ignition finished successfully
Jan 17 00:03:40.084177 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:03:40.102740 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:03:40.122509 ignition[921]: Ignition 2.19.0
Jan 17 00:03:40.122518 ignition[921]: Stage: kargs
Jan 17 00:03:40.126556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:03:40.122703 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:40.122713 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:40.123731 ignition[921]: kargs: kargs passed
Jan 17 00:03:40.123779 ignition[921]: Ignition finished successfully
Jan 17 00:03:40.147718 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:03:40.163397 ignition[927]: Ignition 2.19.0
Jan 17 00:03:40.163411 ignition[927]: Stage: disks
Jan 17 00:03:40.169850 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:03:40.166457 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:40.176268 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:03:40.166468 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:40.181208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:03:40.167340 ignition[927]: disks: disks passed
Jan 17 00:03:40.190743 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:03:40.167382 ignition[927]: Ignition finished successfully
Jan 17 00:03:40.198715 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:03:40.207566 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:03:40.223774 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:03:40.300647 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 00:03:40.309811 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:03:40.324725 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:03:40.376569 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 17 00:03:40.377758 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:03:40.381484 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:03:40.422593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:03:40.441538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Jan 17 00:03:40.453259 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:03:40.453301 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:03:40.456796 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:03:40.458653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:03:40.465722 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:03:40.482152 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:03:40.476802 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:03:40.476841 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:03:40.501862 systemd-networkd[904]: eth0: Gained IPv6LL
Jan 17 00:03:40.503357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:03:40.513371 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:03:40.524747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:03:40.983981 coreos-metadata[961]: Jan 17 00:03:40.983 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 00:03:40.993063 coreos-metadata[961]: Jan 17 00:03:40.993 INFO Fetch successful
Jan 17 00:03:40.993063 coreos-metadata[961]: Jan 17 00:03:40.993 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 00:03:41.006512 coreos-metadata[961]: Jan 17 00:03:41.004 INFO Fetch successful
Jan 17 00:03:41.019582 coreos-metadata[961]: Jan 17 00:03:41.019 INFO wrote hostname ci-4081.3.6-n-f5e0a482e1 to /sysroot/etc/hostname
Jan 17 00:03:41.027216 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:03:41.262220 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:03:41.294670 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:03:41.316743 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:03:41.324228 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:03:42.674561 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:03:42.688983 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:03:42.698115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:03:42.712257 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:03:42.709089 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:03:42.736393 ignition[1064]: INFO : Ignition 2.19.0
Jan 17 00:03:42.736393 ignition[1064]: INFO : Stage: mount
Jan 17 00:03:42.747518 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:42.747518 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:42.747518 ignition[1064]: INFO : mount: mount passed
Jan 17 00:03:42.747518 ignition[1064]: INFO : Ignition finished successfully
Jan 17 00:03:42.743652 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:03:42.747890 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:03:42.768664 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:03:42.783749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:03:42.811647 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Jan 17 00:03:42.811699 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:03:42.817015 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:03:42.820724 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:03:42.828553 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:03:42.828927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:03:42.853545 ignition[1092]: INFO : Ignition 2.19.0
Jan 17 00:03:42.853545 ignition[1092]: INFO : Stage: files
Jan 17 00:03:42.859810 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:42.859810 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:42.859810 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:03:42.859810 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:03:42.859810 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:03:42.900682 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:03:42.906568 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:03:42.906568 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:03:42.901069 unknown[1092]: wrote ssh authorized keys file for user: core
Jan 17 00:03:42.922340 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 17 00:03:42.922340 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 17 00:03:42.958229 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:03:43.044790 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 17 00:03:43.600897 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:03:43.886900 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:03:43.886900 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:03:43.906911 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: files passed
Jan 17 00:03:43.914619 ignition[1092]: INFO : Ignition finished successfully
Jan 17 00:03:43.909499 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:03:43.943804 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:03:43.956686 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:03:43.972320 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:03:43.999654 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:03:43.999654 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:03:43.972406 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:03:44.026564 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:03:43.995749 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:03:44.004922 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:03:44.033790 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:03:44.069337 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:03:44.069467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:03:44.079119 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:03:44.088043 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:03:44.097149 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:03:44.107366 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:03:44.127648 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:03:44.140761 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:03:44.159007 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:03:44.163981 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:03:44.173767 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:03:44.182106 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:03:44.182231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:03:44.194538 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:03:44.199066 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:03:44.207846 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:03:44.216413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:03:44.225070 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:03:44.234231 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:03:44.243145 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:03:44.253104 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:03:44.261429 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:03:44.271024 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:03:44.278443 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:03:44.278573 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:03:44.289928 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:03:44.294687 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:03:44.303782 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:03:44.305545 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:03:44.313209 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:03:44.313324 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:03:44.326521 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:03:44.326660 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:03:44.336067 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:03:44.336158 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:03:44.346232 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 00:03:44.346324 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:03:44.378788 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:03:44.400416 ignition[1145]: INFO : Ignition 2.19.0
Jan 17 00:03:44.400416 ignition[1145]: INFO : Stage: umount
Jan 17 00:03:44.400416 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:44.400416 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:44.400416 ignition[1145]: INFO : umount: umount passed
Jan 17 00:03:44.400416 ignition[1145]: INFO : Ignition finished successfully
Jan 17 00:03:44.390731 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:03:44.390888 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:03:44.402644 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:03:44.409756 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:03:44.411629 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:03:44.419353 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:03:44.419518 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:03:44.433874 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:03:44.433963 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:03:44.440240 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:03:44.441772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:03:44.448902 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:03:44.450165 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:03:44.450217 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:03:44.455895 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:03:44.455941 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:03:44.464828 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:03:44.464868 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:03:44.474210 systemd[1]: Stopped target network.target - Network.
Jan 17 00:03:44.481520 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:03:44.481577 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:03:44.490495 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:03:44.498379 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:03:44.502362 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:03:44.507792 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:03:44.516991 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:03:44.524788 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:03:44.524839 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:03:44.536587 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:03:44.536631 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:03:44.545637 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:03:44.545683 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:03:44.553504 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:03:44.553544 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:03:44.561906 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:03:44.573862 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:03:44.577580 systemd-networkd[904]: eth0: DHCPv6 lease lost
Jan 17 00:03:44.586176 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:03:44.586286 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:03:44.595750 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:03:44.745706 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: Data path switched from VF: enP6208s1
Jan 17 00:03:44.597559 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:03:44.606083 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:03:44.606129 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:03:44.627692 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:03:44.637730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:03:44.637796 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:03:44.646574 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:03:44.646614 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:03:44.654597 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:03:44.654632 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:03:44.662922 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:03:44.662965 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:03:44.671937 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:03:44.718162 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:03:44.718291 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:03:44.728264 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:03:44.728350 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:03:44.742393 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:03:44.742454 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:03:44.750208 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:03:44.750248 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:03:44.758773 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:03:44.758828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:03:44.771359 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:03:44.771411 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:03:44.784229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:03:44.784283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:03:44.798522 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:03:44.798572 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:03:44.825726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:03:44.836975 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:03:44.837035 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:03:44.848710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:03:44.848753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:03:44.858788 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:03:44.858893 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:03:44.866647 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:03:44.866720 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:03:44.875870 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:03:44.900155 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:03:45.049478 systemd[1]: Switching root.
Jan 17 00:03:45.110226 systemd-journald[217]: Journal stopped
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 17 00:03:35.188281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 17 00:03:35.188287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 17 00:03:35.188293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 17 00:03:35.188301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 17 00:03:35.188308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 17 00:03:35.188314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 17 00:03:35.188320 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 17 00:03:35.188327 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 17 00:03:35.188333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 17 00:03:35.188339 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jan 17 00:03:35.188346 kernel: Zone ranges: Jan 17 00:03:35.188352 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 17 00:03:35.188359 kernel: DMA32 empty Jan 17 00:03:35.188365 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:03:35.188372 kernel: Movable zone start for each node Jan 17 00:03:35.188382 kernel: Early memory node ranges Jan 17 00:03:35.188389 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 17 00:03:35.188396 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 17 00:03:35.188403 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 17 00:03:35.188410 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 17 00:03:35.188418 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 17 00:03:35.188425 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 17 00:03:35.188431 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:03:35.188438 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 17 00:03:35.188445 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 17 00:03:35.188452 kernel: psci: probing for conduit method from ACPI. Jan 17 00:03:35.188459 kernel: psci: PSCIv1.1 detected in firmware. Jan 17 00:03:35.188465 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 00:03:35.188472 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 17 00:03:35.188479 kernel: psci: SMC Calling Convention v1.4 Jan 17 00:03:35.189649 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 17 00:03:35.189660 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 17 00:03:35.189672 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 17 00:03:35.189679 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 17 00:03:35.189686 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 17 00:03:35.189693 kernel: Detected PIPT I-cache on CPU0 Jan 17 00:03:35.189699 kernel: CPU features: detected: GIC system register CPU interface Jan 17 00:03:35.189706 kernel: CPU features: detected: Hardware dirty bit management Jan 17 00:03:35.189713 kernel: CPU features: detected: Spectre-BHB Jan 17 00:03:35.189720 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 00:03:35.189727 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 00:03:35.189734 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 00:03:35.189741 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 17 00:03:35.189749 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 00:03:35.189756 kernel: alternatives: applying boot alternatives Jan 17 00:03:35.189764 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:03:35.189771 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:03:35.189778 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:03:35.189785 kernel: Fallback order for Node 0: 0 Jan 17 00:03:35.189792 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 17 00:03:35.189798 kernel: Policy zone: Normal Jan 17 00:03:35.189805 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:03:35.189812 kernel: software IO TLB: area num 2. Jan 17 00:03:35.189819 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 00:03:35.189827 kernel: Memory: 3982632K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211528K reserved, 0K cma-reserved) Jan 17 00:03:35.189834 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:03:35.189841 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:03:35.189848 kernel: rcu: RCU event tracing is enabled. Jan 17 00:03:35.189855 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:03:35.189862 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:03:35.189869 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:03:35.189876 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 00:03:35.189883 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:03:35.189889 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 00:03:35.189896 kernel: GICv3: 960 SPIs implemented Jan 17 00:03:35.189904 kernel: GICv3: 0 Extended SPIs implemented Jan 17 00:03:35.189911 kernel: Root IRQ handler: gic_handle_irq Jan 17 00:03:35.189918 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 17 00:03:35.189925 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 00:03:35.189931 kernel: ITS: No ITS available, not enabling LPIs Jan 17 00:03:35.189938 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:03:35.189945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:03:35.189952 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 00:03:35.189959 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 00:03:35.189965 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 00:03:35.189973 kernel: Console: colour dummy device 80x25 Jan 17 00:03:35.189981 kernel: printk: console [tty1] enabled Jan 17 00:03:35.189988 kernel: ACPI: Core revision 20230628 Jan 17 00:03:35.189995 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 00:03:35.190002 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:03:35.190009 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:03:35.190016 kernel: landlock: Up and running. Jan 17 00:03:35.190023 kernel: SELinux: Initializing. Jan 17 00:03:35.190030 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:03:35.190037 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:03:35.190045 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:03:35.190052 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:03:35.190060 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 17 00:03:35.190067 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 17 00:03:35.190074 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:03:35.190081 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:03:35.190088 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:03:35.190095 kernel: Remapping and enabling EFI services. Jan 17 00:03:35.190108 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:03:35.190116 kernel: Detected PIPT I-cache on CPU1 Jan 17 00:03:35.190124 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 00:03:35.190131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:03:35.190139 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 00:03:35.190147 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:03:35.190154 kernel: SMP: Total of 2 processors activated. 
Jan 17 00:03:35.190162 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 00:03:35.190169 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 00:03:35.190178 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 00:03:35.190185 kernel: CPU features: detected: CRC32 instructions Jan 17 00:03:35.190193 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 00:03:35.190200 kernel: CPU features: detected: LSE atomic instructions Jan 17 00:03:35.190207 kernel: CPU features: detected: Privileged Access Never Jan 17 00:03:35.190214 kernel: CPU: All CPU(s) started at EL1 Jan 17 00:03:35.190222 kernel: alternatives: applying system-wide alternatives Jan 17 00:03:35.190229 kernel: devtmpfs: initialized Jan 17 00:03:35.190236 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:03:35.190245 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:03:35.190252 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:03:35.190260 kernel: SMBIOS 3.1.0 present. Jan 17 00:03:35.190267 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 00:03:35.190274 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:03:35.190282 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 00:03:35.190289 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 00:03:35.190297 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 00:03:35.190304 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:03:35.190313 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 00:03:35.190320 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:03:35.190328 kernel: cpuidle: using governor menu Jan 17 00:03:35.190335 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 00:03:35.190342 kernel: ASID allocator initialised with 32768 entries Jan 17 00:03:35.190350 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:03:35.190357 kernel: Serial: AMBA PL011 UART driver Jan 17 00:03:35.190365 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 00:03:35.190372 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 00:03:35.190380 kernel: Modules: 509008 pages in range for PLT usage Jan 17 00:03:35.190388 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:03:35.190395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:03:35.190402 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 00:03:35.190410 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 00:03:35.190417 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:03:35.190427 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:03:35.190436 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 00:03:35.190444 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 00:03:35.190455 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:03:35.190464 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:03:35.190472 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:03:35.190480 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:03:35.190497 kernel: ACPI: Interpreter enabled Jan 17 00:03:35.190506 kernel: ACPI: Using GIC for interrupt routing Jan 17 00:03:35.190514 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 00:03:35.190522 kernel: printk: console [ttyAMA0] enabled Jan 17 00:03:35.190535 kernel: printk: bootconsole [pl11] disabled Jan 17 00:03:35.190546 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 00:03:35.190555 kernel: iommu: Default domain type: Translated Jan 17 00:03:35.190563 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 00:03:35.190572 kernel: efivars: Registered efivars operations Jan 17 00:03:35.190580 kernel: vgaarb: loaded Jan 17 00:03:35.190588 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 00:03:35.190595 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:03:35.190602 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:03:35.190609 kernel: pnp: PnP ACPI init Jan 17 00:03:35.190618 kernel: pnp: PnP ACPI: found 0 devices Jan 17 00:03:35.190626 kernel: NET: Registered PF_INET protocol family Jan 17 00:03:35.190635 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:03:35.190644 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:03:35.190652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:03:35.190661 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:03:35.190669 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:03:35.190678 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:03:35.190686 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:03:35.190697 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:03:35.190705 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 
00:03:35.190713 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:03:35.190722 kernel: kvm [1]: HYP mode not available Jan 17 00:03:35.190729 kernel: Initialise system trusted keyrings Jan 17 00:03:35.190737 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:03:35.190744 kernel: Key type asymmetric registered Jan 17 00:03:35.190751 kernel: Asymmetric key parser 'x509' registered Jan 17 00:03:35.190758 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:03:35.190767 kernel: io scheduler mq-deadline registered Jan 17 00:03:35.190774 kernel: io scheduler kyber registered Jan 17 00:03:35.190782 kernel: io scheduler bfq registered Jan 17 00:03:35.190789 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:03:35.190796 kernel: thunder_xcv, ver 1.0 Jan 17 00:03:35.190803 kernel: thunder_bgx, ver 1.0 Jan 17 00:03:35.190810 kernel: nicpf, ver 1.0 Jan 17 00:03:35.190817 kernel: nicvf, ver 1.0 Jan 17 00:03:35.190952 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 00:03:35.191026 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:03:34 UTC (1768608214) Jan 17 00:03:35.191037 kernel: efifb: probing for efifb Jan 17 00:03:35.191044 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:03:35.191052 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:03:35.191059 kernel: efifb: scrolling: redraw Jan 17 00:03:35.191066 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:03:35.191074 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:03:35.191081 kernel: fb0: EFI VGA frame buffer device Jan 17 00:03:35.191091 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 00:03:35.191099 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:03:35.191106 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 17 00:03:35.191113 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 00:03:35.191121 kernel: watchdog: Hard watchdog permanently disabled Jan 17 00:03:35.191128 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:03:35.191135 kernel: Segment Routing with IPv6 Jan 17 00:03:35.191143 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:03:35.191150 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:03:35.191159 kernel: Key type dns_resolver registered Jan 17 00:03:35.191166 kernel: registered taskstats version 1 Jan 17 00:03:35.191173 kernel: Loading compiled-in X.509 certificates Jan 17 00:03:35.191181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 17 00:03:35.191188 kernel: Key type .fscrypt registered Jan 17 00:03:35.191195 kernel: Key type fscrypt-provisioning registered Jan 17 00:03:35.191202 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:03:35.191209 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:03:35.191217 kernel: ima: No architecture policies found Jan 17 00:03:35.191226 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 00:03:35.191233 kernel: clk: Disabling unused clocks Jan 17 00:03:35.191240 kernel: Freeing unused kernel memory: 39424K Jan 17 00:03:35.191248 kernel: Run /init as init process Jan 17 00:03:35.191255 kernel: with arguments: Jan 17 00:03:35.191262 kernel: /init Jan 17 00:03:35.191269 kernel: with environment: Jan 17 00:03:35.191276 kernel: HOME=/ Jan 17 00:03:35.191284 kernel: TERM=linux Jan 17 00:03:35.191293 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:03:35.191304 systemd[1]: Detected virtualization microsoft. Jan 17 00:03:35.191312 systemd[1]: Detected architecture arm64. Jan 17 00:03:35.191320 systemd[1]: Running in initrd. Jan 17 00:03:35.191328 systemd[1]: No hostname configured, using default hostname. Jan 17 00:03:35.191335 systemd[1]: Hostname set to . Jan 17 00:03:35.191343 systemd[1]: Initializing machine ID from random generator. Jan 17 00:03:35.191353 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:03:35.191361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:03:35.191369 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:03:35.191378 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:03:35.191386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:03:35.191394 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:03:35.191402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:03:35.191412 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:03:35.191421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:03:35.191429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:03:35.191437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:03:35.191445 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:03:35.191453 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:03:35.191461 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:03:35.191469 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:03:35.191477 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:03:35.195076 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:03:35.195093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:03:35.195101 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:03:35.195110 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:03:35.195118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:03:35.195127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:03:35.195135 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:03:35.195143 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:03:35.195158 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:03:35.195166 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:03:35.195174 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:03:35.195182 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:03:35.195190 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:03:35.195226 systemd-journald[217]: Collecting audit messages is disabled. Jan 17 00:03:35.195248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:03:35.195257 systemd-journald[217]: Journal started Jan 17 00:03:35.195276 systemd-journald[217]: Runtime Journal (/run/log/journal/8aea3ac1837a416c9436dabba461290e) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:03:35.195877 systemd-modules-load[218]: Inserted module 'overlay' Jan 17 00:03:35.212612 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:03:35.222499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:03:35.224523 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:03:35.236182 kernel: Bridge firewalling registered Jan 17 00:03:35.231290 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 17 00:03:35.232149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:03:35.241872 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:03:35.248718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:03:35.258194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:03:35.276828 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:03:35.288772 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:03:35.299775 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:03:35.326748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:03:35.333509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:03:35.343178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:03:35.348207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:03:35.360241 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:03:35.381666 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:03:35.388628 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 00:03:35.404285 dracut-cmdline[250]: dracut-dracut-053 Jan 17 00:03:35.411645 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:03:35.409784 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:03:35.446005 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:03:35.476759 systemd-resolved[254]: Positive Trust Anchors: Jan 17 00:03:35.476772 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:03:35.476804 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:03:35.482107 systemd-resolved[254]: Defaulting to hostname 'linux'. Jan 17 00:03:35.482978 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:03:35.494293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:03:35.543499 kernel: SCSI subsystem initialized Jan 17 00:03:35.550497 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:03:35.560506 kernel: iscsi: registered transport (tcp) Jan 17 00:03:35.574492 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:03:35.574512 kernel: QLogic iSCSI HBA Driver Jan 17 00:03:35.608753 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:03:35.619690 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:03:35.651796 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:03:35.651858 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:03:35.656665 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:03:35.703512 kernel: raid6: neonx8 gen() 15807 MB/s Jan 17 00:03:35.722492 kernel: raid6: neonx4 gen() 15691 MB/s Jan 17 00:03:35.741493 kernel: raid6: neonx2 gen() 13264 MB/s Jan 17 00:03:35.761490 kernel: raid6: neonx1 gen() 10562 MB/s Jan 17 00:03:35.780489 kernel: raid6: int64x8 gen() 6978 MB/s Jan 17 00:03:35.799489 kernel: raid6: int64x4 gen() 7353 MB/s Jan 17 00:03:35.819489 kernel: raid6: int64x2 gen() 6147 MB/s Jan 17 00:03:35.841605 kernel: raid6: int64x1 gen() 5072 MB/s Jan 17 00:03:35.841624 kernel: raid6: using algorithm neonx8 gen() 15807 MB/s Jan 17 00:03:35.863938 kernel: raid6: .... 
xor() 11952 MB/s, rmw enabled Jan 17 00:03:35.863958 kernel: raid6: using neon recovery algorithm Jan 17 00:03:35.874632 kernel: xor: measuring software checksum speed Jan 17 00:03:35.874647 kernel: 8regs : 19769 MB/sec Jan 17 00:03:35.877582 kernel: 32regs : 19679 MB/sec Jan 17 00:03:35.880333 kernel: arm64_neon : 27186 MB/sec Jan 17 00:03:35.883528 kernel: xor: using function: arm64_neon (27186 MB/sec) Jan 17 00:03:35.932505 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:03:35.943111 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:03:35.955632 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:03:35.974572 systemd-udevd[439]: Using default interface naming scheme 'v255'. Jan 17 00:03:35.979044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:03:36.031689 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:03:36.051555 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Jan 17 00:03:36.078458 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:03:36.093896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:03:36.130267 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:03:36.146720 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:03:36.168142 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:03:36.177827 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:03:36.188719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:03:36.204398 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:03:36.230667 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:03:36.255646 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:03:36.275086 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 00:03:36.275107 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 00:03:36.277265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:03:36.277435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:03:36.349771 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 00:03:36.349796 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 00:03:36.349807 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 17 00:03:36.349817 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 00:03:36.349826 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 00:03:36.349844 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 17 00:03:36.349853 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 00:03:36.350013 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 00:03:36.295850 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 00:03:36.367299 kernel: scsi host1: storvsc_host_t Jan 17 00:03:36.367470 kernel: scsi host0: storvsc_host_t Jan 17 00:03:36.367593 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 00:03:36.312523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:03:36.379867 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 17 00:03:36.312735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:03:36.391943 kernel: PTP clock support registered Jan 17 00:03:36.360719 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:03:36.409069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:03:36.289636 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 00:03:36.294740 kernel: hv_vmbus: registering driver hv_utils Jan 17 00:03:36.294754 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 00:03:36.294762 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 00:03:36.294771 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 00:03:36.294779 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 00:03:36.294907 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:03:36.294916 systemd-journald[217]: Time jumped backwards, rotating. Jan 17 00:03:36.294952 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 00:03:36.425682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:03:36.311178 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 00:03:36.311356 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 00:03:36.311445 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:03:36.425777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:03:36.342702 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: VF slot 1 added Jan 17 00:03:36.342915 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 00:03:36.343030 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 00:03:36.343118 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:03:36.343127 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:03:36.276846 systemd-resolved[254]: Clock change detected. Flushing caches. Jan 17 00:03:36.370298 kernel: hv_vmbus: registering driver hv_pci Jan 17 00:03:36.370318 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#255 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:03:36.325049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:03:36.384340 kernel: hv_pci cd10f029-1840-48df-a83d-bf82dd7b6dec: PCI VMBus probing: Using version 0x10004 Jan 17 00:03:36.365746 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:03:36.399654 kernel: hv_pci cd10f029-1840-48df-a83d-bf82dd7b6dec: PCI host bridge to bus 1840:00 Jan 17 00:03:36.399802 kernel: pci_bus 1840:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 00:03:36.385724 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 00:03:36.414698 kernel: pci_bus 1840:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 00:03:36.414816 kernel: pci 1840:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 00:03:36.432582 kernel: pci 1840:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:03:36.444769 kernel: pci 1840:00:02.0: enabling Extended Tags Jan 17 00:03:36.444829 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#199 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:03:36.450673 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:03:36.484036 kernel: pci 1840:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1840:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 00:03:36.484205 kernel: pci_bus 1840:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 00:03:36.484296 kernel: pci 1840:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:03:36.522990 kernel: mlx5_core 1840:00:02.0: enabling device (0000 -> 0002) Jan 17 00:03:36.528541 kernel: mlx5_core 1840:00:02.0: firmware version: 16.30.5026 Jan 17 00:03:36.729213 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: VF registering: eth1 Jan 17 00:03:36.729418 kernel: mlx5_core 1840:00:02.0 eth1: joined to eth0 Jan 17 00:03:36.734808 kernel: mlx5_core 1840:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 00:03:36.744541 kernel: mlx5_core 1840:00:02.0 enP6208s1: renamed from eth1 Jan 17 00:03:36.839567 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 00:03:36.883654 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (487) Jan 17 00:03:36.897589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:03:36.931234 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 00:03:36.958545 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (491) Jan 17 00:03:36.970104 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 00:03:36.975828 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 00:03:37.007755 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:03:37.029627 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:03:37.037548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:03:37.044538 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:03:38.055145 disk-uuid[607]: The operation has completed successfully. Jan 17 00:03:38.060279 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:03:38.126661 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:03:38.130692 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:03:38.153695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:03:38.165130 sh[720]: Success Jan 17 00:03:38.189547 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:03:38.441026 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:03:38.449648 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 17 00:03:38.457028 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:03:38.497423 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:03:38.497474 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:03:38.503735 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:03:38.508296 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:03:38.512081 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:03:38.817664 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:03:38.822162 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:03:38.843806 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:03:38.853092 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:03:38.884191 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:03:38.884254 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:03:38.887732 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:03:38.925551 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:03:38.933561 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:03:38.944977 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:03:38.954375 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:03:38.964683 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:03:38.980754 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:03:38.992765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:03:39.030455 systemd-networkd[904]: lo: Link UP Jan 17 00:03:39.030466 systemd-networkd[904]: lo: Gained carrier Jan 17 00:03:39.032035 systemd-networkd[904]: Enumeration completed Jan 17 00:03:39.033879 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:03:39.033883 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:03:39.036620 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:03:39.042233 systemd[1]: Reached target network.target - Network. Jan 17 00:03:39.120545 kernel: mlx5_core 1840:00:02.0 enP6208s1: Link up Jan 17 00:03:39.158551 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: Data path switched to VF: enP6208s1 Jan 17 00:03:39.159581 systemd-networkd[904]: enP6208s1: Link UP Jan 17 00:03:39.159676 systemd-networkd[904]: eth0: Link UP Jan 17 00:03:39.159772 systemd-networkd[904]: eth0: Gained carrier Jan 17 00:03:39.159780 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 00:03:39.181176 systemd-networkd[904]: enP6208s1: Gained carrier Jan 17 00:03:39.194560 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:03:39.918799 ignition[903]: Ignition 2.19.0 Jan 17 00:03:39.918808 ignition[903]: Stage: fetch-offline Jan 17 00:03:39.922891 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:03:39.918843 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:03:39.918851 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:03:39.918941 ignition[903]: parsed url from cmdline: "" Jan 17 00:03:39.940741 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:03:39.918944 ignition[903]: no config URL provided Jan 17 00:03:39.918948 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:03:39.918955 ignition[903]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:03:39.918959 ignition[903]: failed to fetch config: resource requires networking Jan 17 00:03:39.919418 ignition[903]: Ignition finished successfully Jan 17 00:03:39.965009 ignition[914]: Ignition 2.19.0 Jan 17 00:03:39.965015 ignition[914]: Stage: fetch Jan 17 00:03:39.965211 ignition[914]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:03:39.965222 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:03:39.965322 ignition[914]: parsed url from cmdline: "" Jan 17 00:03:39.965326 ignition[914]: no config URL provided Jan 17 00:03:39.965331 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:03:39.965338 ignition[914]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:03:39.965362 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 00:03:40.074862 ignition[914]: GET result: OK Jan 17 00:03:40.074948 ignition[914]: config has been read from IMDS userdata Jan 17 00:03:40.075031 ignition[914]: parsing config with SHA512: 943086293b4ce6a7a73782aa06ccae527f8a3f3e693a1dda90f046d77581ad410fb18e75ef8778198ed0085757cf41bcfc0e01226ba2235f71434268b5f65cc5 Jan 17 00:03:40.078661 unknown[914]: fetched base config from "system" Jan 17 00:03:40.079034 ignition[914]: fetch: fetch complete Jan 17 00:03:40.078675 unknown[914]: fetched base config from "system" Jan 17 00:03:40.079038 ignition[914]: fetch: fetch passed Jan 17 00:03:40.078681 unknown[914]: fetched user config from "azure" Jan 17 00:03:40.079078 ignition[914]: Ignition finished successfully Jan 17 00:03:40.084177 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:03:40.102740 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:03:40.122509 ignition[921]: Ignition 2.19.0 Jan 17 00:03:40.122518 ignition[921]: Stage: kargs Jan 17 00:03:40.126556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:03:40.122703 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:03:40.122713 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:03:40.123731 ignition[921]: kargs: kargs passed Jan 17 00:03:40.123779 ignition[921]: Ignition finished successfully Jan 17 00:03:40.147718 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:03:40.163397 ignition[927]: Ignition 2.19.0 Jan 17 00:03:40.163411 ignition[927]: Stage: disks Jan 17 00:03:40.169850 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 17 00:03:40.166457 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:40.176268 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:03:40.166468 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:40.181208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:03:40.167340 ignition[927]: disks: disks passed
Jan 17 00:03:40.190743 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:03:40.167382 ignition[927]: Ignition finished successfully
Jan 17 00:03:40.198715 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:03:40.207566 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:03:40.223774 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:03:40.300647 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 00:03:40.309811 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:03:40.324725 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:03:40.376569 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 17 00:03:40.377758 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:03:40.381484 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:03:40.422593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:03:40.441538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Jan 17 00:03:40.453259 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:03:40.453301 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:03:40.456796 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:03:40.458653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:03:40.465722 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:03:40.482152 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:03:40.476802 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:03:40.476841 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:03:40.501862 systemd-networkd[904]: eth0: Gained IPv6LL
Jan 17 00:03:40.503357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:03:40.513371 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:03:40.524747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:03:40.983981 coreos-metadata[961]: Jan 17 00:03:40.983 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 00:03:40.993063 coreos-metadata[961]: Jan 17 00:03:40.993 INFO Fetch successful
Jan 17 00:03:40.993063 coreos-metadata[961]: Jan 17 00:03:40.993 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 00:03:41.006512 coreos-metadata[961]: Jan 17 00:03:41.004 INFO Fetch successful
Jan 17 00:03:41.019582 coreos-metadata[961]: Jan 17 00:03:41.019 INFO wrote hostname ci-4081.3.6-n-f5e0a482e1 to /sysroot/etc/hostname
Jan 17 00:03:41.027216 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:03:41.262220 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:03:41.294670 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:03:41.316743 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:03:41.324228 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:03:42.674561 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:03:42.688983 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:03:42.698115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:03:42.712257 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:03:42.709089 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:03:42.736393 ignition[1064]: INFO : Ignition 2.19.0
Jan 17 00:03:42.736393 ignition[1064]: INFO : Stage: mount
Jan 17 00:03:42.747518 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:42.747518 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:42.747518 ignition[1064]: INFO : mount: mount passed
Jan 17 00:03:42.747518 ignition[1064]: INFO : Ignition finished successfully
Jan 17 00:03:42.743652 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:03:42.747890 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:03:42.768664 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:03:42.783749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:03:42.811647 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Jan 17 00:03:42.811699 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:03:42.817015 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:03:42.820724 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:03:42.828553 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:03:42.828927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
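Per the coreos-metadata lines above, the hostname agent fetches the instance name from IMDS and writes it into the target root. A rough Python equivalent, hypothetical and for illustration only (the real agent is a separate Flatcar binary; the /sysroot path reflects the initrd view exactly as logged):

#!/usr/bin/env python3
# Sketch: fetch the Azure instance name and persist it as the hostname,
# mirroring "wrote hostname ci-4081.3.6-n-f5e0a482e1 to /sysroot/etc/hostname".
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
name = urllib.request.urlopen(req, timeout=10).read().decode().strip()

with open("/sysroot/etc/hostname", "w") as f:
    f.write(name + "\n")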
Jan 17 00:03:42.853545 ignition[1092]: INFO : Ignition 2.19.0
Jan 17 00:03:42.853545 ignition[1092]: INFO : Stage: files
Jan 17 00:03:42.859810 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:42.859810 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:42.859810 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:03:42.859810 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:03:42.859810 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:03:42.900682 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:03:42.906568 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:03:42.906568 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:03:42.901069 unknown[1092]: wrote ssh authorized keys file for user: core
Jan 17 00:03:42.922340 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 17 00:03:42.922340 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 17 00:03:42.958229 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:03:43.044790 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:03:43.053665 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 17 00:03:43.600897 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:03:43.886900 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:03:43.886900 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:03:43.906911 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:03:43.914619 ignition[1092]: INFO : files: files passed
Jan 17 00:03:43.914619 ignition[1092]: INFO : Ignition finished successfully
Jan 17 00:03:43.909499 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:03:43.943804 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:03:43.956686 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:03:43.972320 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:03:43.999654 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:03:43.999654 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:03:43.972406 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:03:44.026564 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:03:43.995749 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:03:44.004922 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:03:44.033790 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:03:44.069337 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:03:44.069467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
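The files stage above is driven entirely by the fetched config: it creates the core user, writes files and a symlink, and installs and enables prepare-helm.service. The sketch below builds the kind of Ignition (spec 3.x) config that would produce those operations; paths and URLs are taken from the log, while the version number, unit contents, and overall shape are illustrative assumptions, not the actual CI config.

#!/usr/bin/env python3
# Sketch: assemble and print an Ignition-style config matching the
# logged files-stage operations. Hypothetical example only.
import json

config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
            "contents": {
                "source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"
            },
        }],
        "links": [{
            # matches op(9): writing link /etc/extensions/kubernetes.raw
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
        }],
    },
    "systemd": {
        "units": [{
            "name": "prepare-helm.service",
            "enabled": True,  # matches op(d): setting preset to enabled
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",  # placeholder body
        }],
    },
}

print(json.dumps(config, indent=2))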
Jan 17 00:03:44.079119 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:03:44.088043 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:03:44.097149 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:03:44.107366 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:03:44.127648 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:03:44.140761 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:03:44.159007 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:03:44.163981 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:03:44.173767 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:03:44.182106 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:03:44.182231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:03:44.194538 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:03:44.199066 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:03:44.207846 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:03:44.216413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:03:44.225070 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:03:44.234231 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:03:44.243145 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:03:44.253104 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:03:44.261429 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:03:44.271024 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:03:44.278443 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:03:44.278573 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:03:44.289928 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:03:44.294687 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:03:44.303782 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:03:44.305545 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:03:44.313209 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:03:44.313324 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:03:44.326521 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:03:44.326660 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:03:44.336067 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:03:44.336158 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:03:44.346232 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 00:03:44.346324 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:03:44.378788 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:03:44.400416 ignition[1145]: INFO : Ignition 2.19.0
Jan 17 00:03:44.400416 ignition[1145]: INFO : Stage: umount
Jan 17 00:03:44.400416 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:03:44.400416 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:03:44.400416 ignition[1145]: INFO : umount: umount passed
Jan 17 00:03:44.400416 ignition[1145]: INFO : Ignition finished successfully
Jan 17 00:03:44.390731 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:03:44.390888 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:03:44.402644 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:03:44.409756 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:03:44.411629 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:03:44.419353 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:03:44.419518 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:03:44.433874 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:03:44.433963 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:03:44.440240 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:03:44.441772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:03:44.448902 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:03:44.450165 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:03:44.450217 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:03:44.455895 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:03:44.455941 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:03:44.464828 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:03:44.464868 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:03:44.474210 systemd[1]: Stopped target network.target - Network.
Jan 17 00:03:44.481520 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:03:44.481577 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:03:44.490495 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:03:44.498379 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:03:44.502362 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:03:44.507792 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:03:44.516991 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:03:44.524788 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:03:44.524839 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:03:44.536587 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:03:44.536631 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:03:44.545637 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:03:44.545683 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:03:44.553504 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:03:44.553544 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:03:44.561906 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:03:44.573862 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:03:44.577580 systemd-networkd[904]: eth0: DHCPv6 lease lost
Jan 17 00:03:44.586176 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:03:44.586286 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:03:44.595750 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:03:44.745706 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: Data path switched from VF: enP6208s1
Jan 17 00:03:44.597559 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:03:44.606083 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:03:44.606129 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:03:44.627692 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:03:44.637730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:03:44.637796 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:03:44.646574 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:03:44.646614 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:03:44.654597 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:03:44.654632 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:03:44.662922 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:03:44.662965 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:03:44.671937 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:03:44.718162 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:03:44.718291 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:03:44.728264 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:03:44.728350 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:03:44.742393 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:03:44.742454 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:03:44.750208 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:03:44.750248 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:03:44.758773 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:03:44.758828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:03:44.771359 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:03:44.771411 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:03:44.784229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:03:44.784283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:03:44.798522 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:03:44.798572 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:03:44.825726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:03:44.836975 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:03:44.837035 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:03:44.848710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:03:44.848753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:03:44.858788 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:03:44.858893 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:03:44.866647 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:03:44.866720 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:03:44.875870 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:03:44.900155 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:03:45.049478 systemd[1]: Switching root.
Jan 17 00:03:45.110226 systemd-journald[217]: Journal stopped
Jan 17 00:03:54.138043 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:03:54.138068 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:03:54.138079 kernel: SELinux: policy capability open_perms=1
Jan 17 00:03:54.138089 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:03:54.138097 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:03:54.138105 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:03:54.138113 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:03:54.138122 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:03:54.138129 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:03:54.138138 systemd[1]: Successfully loaded SELinux policy in 162.525ms.
Jan 17 00:03:54.138148 kernel: audit: type=1403 audit(1768608230.763:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:03:54.138157 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.212ms.
Jan 17 00:03:54.138167 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:03:54.138177 systemd[1]: Detected virtualization microsoft.
Jan 17 00:03:54.138186 systemd[1]: Detected architecture arm64.
Jan 17 00:03:54.138196 systemd[1]: Detected first boot.
Jan 17 00:03:54.138206 systemd[1]: Hostname set to .
Jan 17 00:03:54.138215 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:03:54.138224 zram_generator::config[1187]: No configuration found.
Jan 17 00:03:54.138234 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:03:54.138243 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:03:54.138254 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:03:54.138263 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:03:54.138276 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:03:54.138285 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:03:54.138295 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:03:54.138304 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:03:54.138313 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:03:54.138324 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:03:54.138334 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:03:54.138343 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:03:54.138352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:03:54.138362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:03:54.138371 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:03:54.138380 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:03:54.138390 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:03:54.138399 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:03:54.138410 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 17 00:03:54.138419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:03:54.138428 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:03:54.138440 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:03:54.138450 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:03:54.138459 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:03:54.138468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:03:54.138481 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:03:54.138491 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:03:54.138500 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:03:54.138509 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:03:54.138519 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:03:54.138538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:03:54.138549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:03:54.138562 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:03:54.138572 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:03:54.138581 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:03:54.138591 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:03:54.138600 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:03:54.138610 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:03:54.138621 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:03:54.138631 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:03:54.138641 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:03:54.138651 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:03:54.138660 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:03:54.138670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:03:54.138680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:03:54.138689 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:03:54.138702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:03:54.138712 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:03:54.138721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:03:54.138731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:03:54.138740 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:03:54.138750 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:03:54.138760 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:03:54.138770 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:03:54.138779 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:03:54.138791 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:03:54.138800 kernel: fuse: init (API version 7.39)
Jan 17 00:03:54.138809 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:03:54.138818 kernel: loop: module loaded
Jan 17 00:03:54.138827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:03:54.138836 kernel: ACPI: bus type drm_connector registered
Jan 17 00:03:54.138845 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:03:54.138855 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:03:54.138864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:03:54.138876 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:03:54.138900 systemd-journald[1276]: Collecting audit messages is disabled.
Jan 17 00:03:54.138920 systemd[1]: Stopped verity-setup.service.
Jan 17 00:03:54.138931 systemd-journald[1276]: Journal started
Jan 17 00:03:54.138953 systemd-journald[1276]: Runtime Journal (/run/log/journal/ce9905b2ba924cd8a43a486df5c0da6b) is 8.0M, max 78.5M, 70.5M free.
Jan 17 00:03:53.311591 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:03:53.437098 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 00:03:53.437440 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:03:53.437760 systemd[1]: systemd-journald.service: Consumed 2.568s CPU time.
Jan 17 00:03:54.142686 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:03:54.151395 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:03:54.155890 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:03:54.160626 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:03:54.165165 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:03:54.170046 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:03:54.174948 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:03:54.179959 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:03:54.185189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:03:54.190735 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:03:54.190865 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:03:54.197951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:03:54.198091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:03:54.203172 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:03:54.203305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:03:54.208289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:03:54.208408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:03:54.214861 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:03:54.214985 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:03:54.219922 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:03:54.220044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:03:54.225173 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:03:54.230199 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:03:54.235662 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:03:54.241501 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:03:54.254958 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:03:54.265604 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:03:54.271401 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:03:54.276397 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:03:54.276430 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:03:54.281927 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:03:54.288945 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:03:54.294829 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:03:54.299250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:03:54.300792 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:03:54.308753 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:03:54.313661 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:03:54.315877 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:03:54.324618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:03:54.325706 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:03:54.331311 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:03:54.341134 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:03:54.349688 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:03:54.358223 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:03:54.360714 systemd-journald[1276]: Time spent on flushing to /var/log/journal/ce9905b2ba924cd8a43a486df5c0da6b is 58.408ms for 895 entries.
Jan 17 00:03:54.360714 systemd-journald[1276]: System Journal (/var/log/journal/ce9905b2ba924cd8a43a486df5c0da6b) is 11.8M, max 2.6G, 2.6G free.
Jan 17 00:03:54.483497 systemd-journald[1276]: Received client request to flush runtime journal.
Jan 17 00:03:54.486690 systemd-journald[1276]: /var/log/journal/ce9905b2ba924cd8a43a486df5c0da6b/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 17 00:03:54.486817 systemd-journald[1276]: Rotating system journal.
Jan 17 00:03:54.486867 kernel: loop0: detected capacity change from 0 to 114328
Jan 17 00:03:54.369424 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:03:54.378704 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:03:54.387131 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:03:54.411760 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:03:54.426602 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:03:54.432280 udevadm[1324]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 17 00:03:54.461582 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:03:54.470275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:03:54.484665 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:03:54.490818 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:03:54.505192 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:03:54.505856 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:03:54.606975 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jan 17 00:03:54.606993 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jan 17 00:03:54.611610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:03:54.819555 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:03:54.841674 kernel: loop1: detected capacity change from 0 to 211168
Jan 17 00:03:54.899561 kernel: loop2: detected capacity change from 0 to 31320
Jan 17 00:03:54.991876 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:03:55.009734 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:03:55.027748 systemd-udevd[1346]: Using default interface naming scheme 'v255'.
Jan 17 00:03:55.140456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:03:55.155748 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:03:55.187187 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 17 00:03:55.204733 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:03:55.255578 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#243 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:03:55.267134 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:03:55.316561 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:03:55.340552 kernel: loop3: detected capacity change from 0 to 114432
Jan 17 00:03:55.362274 kernel: hv_vmbus: registering driver hv_balloon
Jan 17 00:03:55.362378 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 17 00:03:55.366383 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 17 00:03:55.376453 kernel: hv_vmbus: registering driver hyperv_fb
Jan 17 00:03:55.376518 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 17 00:03:55.381614 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 17 00:03:55.385540 kernel: Console: switching to colour dummy device 80x25
Jan 17 00:03:55.392347 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 00:03:55.400385 systemd-networkd[1356]: lo: Link UP
Jan 17 00:03:55.400392 systemd-networkd[1356]: lo: Gained carrier
Jan 17 00:03:55.402138 systemd-networkd[1356]: Enumeration completed
Jan 17 00:03:55.402710 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:03:55.402717 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:03:55.403072 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:03:55.413754 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:03:55.421787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:03:55.430046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:03:55.431579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:03:55.444674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:03:55.454618 kernel: mlx5_core 1840:00:02.0 enP6208s1: Link up
Jan 17 00:03:55.471690 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1351)
Jan 17 00:03:55.483557 kernel: hv_netvsc 0022487b-8619-0022-487b-86190022487b eth0: Data path switched to VF: enP6208s1
Jan 17 00:03:55.484236 systemd-networkd[1356]: enP6208s1: Link UP
Jan 17 00:03:55.484344 systemd-networkd[1356]: eth0: Link UP
Jan 17 00:03:55.484347 systemd-networkd[1356]: eth0: Gained carrier
Jan 17 00:03:55.484360 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:03:55.488762 systemd-networkd[1356]: enP6208s1: Gained carrier
Jan 17 00:03:55.497641 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 17 00:03:55.524878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 00:03:55.536706 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:03:55.581557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:03:55.703558 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:03:55.714759 kernel: loop4: detected capacity change from 0 to 114328
Jan 17 00:03:55.717955 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:03:55.729554 kernel: loop5: detected capacity change from 0 to 211168
Jan 17 00:03:55.745550 kernel: loop6: detected capacity change from 0 to 31320
Jan 17 00:03:55.757553 kernel: loop7: detected capacity change from 0 to 114432
Jan 17 00:03:55.764428 (sd-merge)[1442]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 17 00:03:55.764869 (sd-merge)[1442]: Merged extensions into '/usr'.
Jan 17 00:03:55.768770 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:03:55.768871 systemd[1]: Reloading...
Jan 17 00:03:55.831557 zram_generator::config[1474]: No configuration found.
Jan 17 00:03:55.880568 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:03:55.966362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:03:56.040294 systemd[1]: Reloading finished in 270 ms.
Jan 17 00:03:56.070534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:03:56.076640 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:03:56.082391 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:03:56.092206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:03:56.102642 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:03:56.108647 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:03:56.116580 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:03:56.126432 lvm[1533]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
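The (sd-merge) lines above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images into an overlay over /usr and /opt, after which systemd reloads its unit set. Below is a small sketch of where sysext discovers such images, using the search paths from the systemd documentation; it only lists candidates and does not attempt the overlay mount or the extension-release validation the real tool performs.

#!/usr/bin/env python3
# Sketch: enumerate candidate sysext images in the documented search
# directories. Illustrative only.
from pathlib import Path

SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

for d in SEARCH_DIRS:
    path = Path(d)
    if not path.is_dir():
        continue
    for img in sorted(path.glob("*.raw")):
        print("candidate extension image:", img)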
Jan 17 00:03:56.136595 systemd[1]: Reloading requested from client PID 1532 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:03:56.136611 systemd[1]: Reloading...
Jan 17 00:03:56.169660 systemd-tmpfiles[1534]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:03:56.170298 systemd-tmpfiles[1534]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:03:56.171150 systemd-tmpfiles[1534]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:03:56.171506 systemd-tmpfiles[1534]: ACLs are not supported, ignoring.
Jan 17 00:03:56.171641 systemd-tmpfiles[1534]: ACLs are not supported, ignoring.
Jan 17 00:03:56.176133 systemd-tmpfiles[1534]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:03:56.176145 systemd-tmpfiles[1534]: Skipping /boot
Jan 17 00:03:56.186103 systemd-tmpfiles[1534]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:03:56.186116 systemd-tmpfiles[1534]: Skipping /boot
Jan 17 00:03:56.219585 zram_generator::config[1564]: No configuration found.
Jan 17 00:03:56.321973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:03:56.397308 systemd[1]: Reloading finished in 260 ms.
Jan 17 00:03:56.415548 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:03:56.427610 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:03:56.448107 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:03:56.455774 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:03:56.463418 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:03:56.476790 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:03:56.483101 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:03:56.493070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:03:56.496622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:03:56.506693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:03:56.513775 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:03:56.520798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:03:56.521503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:03:56.523565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:03:56.530124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:03:56.530345 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:03:56.544060 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:03:56.544219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:03:56.550657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:03:56.557745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:03:56.565783 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:03:56.573967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:03:56.583975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:03:56.584164 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:03:56.590201 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:03:56.596951 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:03:56.603144 systemd-resolved[1632]: Positive Trust Anchors:
Jan 17 00:03:56.603158 systemd-resolved[1632]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:03:56.603190 systemd-resolved[1632]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:03:56.604702 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:03:56.604882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:03:56.610279 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:03:56.610410 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:03:56.615482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:03:56.615657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:03:56.620624 systemd-resolved[1632]: Using system hostname 'ci-4081.3.6-n-f5e0a482e1'.
Jan 17 00:03:56.624361 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:03:56.629763 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:03:56.637949 systemd[1]: Reached target network.target - Network.
Jan 17 00:03:56.642067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:03:56.646870 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:03:56.646936 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:03:56.650509 augenrules[1652]: No rules
Jan 17 00:03:56.651835 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:03:56.949643 systemd-networkd[1356]: eth0: Gained IPv6LL
Jan 17 00:03:56.955154 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:03:56.960858 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:03:57.099390 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:03:57.105362 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:03:59.688262 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:03:59.701359 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:03:59.712730 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:03:59.720810 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:03:59.726087 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:03:59.730909 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:03:59.736414 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:03:59.741973 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:03:59.746582 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:03:59.751711 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:03:59.756768 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:03:59.756798 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:03:59.760423 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:03:59.764954 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:03:59.770792 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:03:59.779028 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:03:59.783776 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:03:59.788249 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:03:59.792219 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:03:59.796438 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:03:59.796462 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:03:59.802605 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 17 00:03:59.807393 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:03:59.814703 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 00:03:59.827619 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:03:59.834162 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:03:59.834307 (chronyd)[1671]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 17 00:03:59.839623 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:03:59.844037 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:03:59.844072 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 17 00:03:59.845726 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 00:03:59.850029 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 00:03:59.852653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:03:59.855572 KVP[1679]: KVP starting; pid is:1679 Jan 17 00:03:59.858724 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:03:59.865602 jq[1677]: false Jan 17 00:03:59.866743 chronyd[1684]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 00:03:59.873726 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:03:59.880660 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:03:59.893722 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:03:59.899922 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:03:59.906755 chronyd[1684]: Timezone right/UTC failed leap second check, ignoring Jan 17 00:03:59.907899 chronyd[1684]: Loaded seccomp filter (level 2) Jan 17 00:03:59.913392 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:03:59.919668 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:03:59.921022 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:03:59.922259 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:03:59.926308 extend-filesystems[1678]: Found loop4 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found loop5 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found loop6 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found loop7 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found sda Jan 17 00:03:59.926308 extend-filesystems[1678]: Found sda1 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found sda2 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found sda3 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found usr Jan 17 00:03:59.926308 extend-filesystems[1678]: Found sda4 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found sda6 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found sda7 Jan 17 00:03:59.926308 extend-filesystems[1678]: Found sda9 Jan 17 00:03:59.926308 extend-filesystems[1678]: Checking size of /dev/sda9 Jan 17 00:04:00.096631 kernel: hv_utils: KVP IC version 4.0 Jan 17 00:04:00.096692 coreos-metadata[1673]: Jan 17 00:04:00.067 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:04:00.096692 coreos-metadata[1673]: Jan 17 00:04:00.072 INFO Fetch successful Jan 17 00:04:00.096692 coreos-metadata[1673]: Jan 17 00:04:00.073 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 00:04:00.096692 coreos-metadata[1673]: Jan 17 00:04:00.078 INFO Fetch successful Jan 17 00:04:00.096692 coreos-metadata[1673]: Jan 17 00:04:00.079 INFO Fetching http://168.63.129.16/machine/13691783-f59b-4764-a619-232577e1eb81/46ddb651%2D3173%2D425e%2D9894%2Dc5837ed84c2e.%5Fci%2D4081.3.6%2Dn%2Df5e0a482e1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 00:04:00.096692 coreos-metadata[1673]: Jan 17 00:04:00.086 INFO Fetch successful Jan 17 00:04:00.096692 coreos-metadata[1673]: Jan 17 
00:04:00.086 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:03:59.944168 KVP[1679]: KVP LIC Version: 3.1 Jan 17 00:04:00.102238 extend-filesystems[1678]: Old size kept for /dev/sda9 Jan 17 00:04:00.102238 extend-filesystems[1678]: Found sr0 Jan 17 00:03:59.942107 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:04:00.132456 coreos-metadata[1673]: Jan 17 00:04:00.097 INFO Fetch successful Jan 17 00:04:00.132489 update_engine[1699]: I20260117 00:04:00.063170 1699 main.cc:92] Flatcar Update Engine starting Jan 17 00:04:00.132489 update_engine[1699]: I20260117 00:04:00.069902 1699 update_check_scheduler.cc:74] Next update check in 7m38s Jan 17 00:03:59.963212 dbus-daemon[1674]: [system] SELinux support is enabled Jan 17 00:03:59.953655 systemd[1]: Started chronyd.service - NTP client/server. Jan 17 00:03:59.970814 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:04:00.139247 jq[1702]: true Jan 17 00:03:59.979387 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:03:59.979968 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:03:59.991490 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:04:00.140891 tar[1712]: linux-arm64/LICENSE Jan 17 00:04:00.140891 tar[1712]: linux-arm64/helm Jan 17 00:03:59.991855 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:03:59.999867 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:04:00.141246 jq[1714]: true Jan 17 00:04:00.014961 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:04:00.015112 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:04:00.045553 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:04:00.045599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:04:00.059040 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:04:00.059063 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:04:00.075776 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:04:00.075957 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:04:00.076264 (ntainerd)[1715]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:04:00.092221 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:04:00.117102 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:04:00.176875 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:04:00.188959 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:04:00.216324 systemd-logind[1696]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:04:00.217654 systemd-logind[1696]: New seat seat0. 
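The metadata agent above talks to both Azure control-plane endpoints: the WireServer at 168.63.129.16 (goal state, certificates) and the Instance Metadata Service at 169.254.169.254. The IMDS query from the log can be repeated by hand; the Metadata: true header is mandatory and the address is only reachable from inside the VM:

    # re-issue the vmSize lookup coreos-metadata performed above
    curl -s -H 'Metadata: true' \
      'http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text'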
Jan 17 00:04:00.218263 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:04:00.254608 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1743) Jan 17 00:04:00.335410 bash[1774]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:04:00.332577 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:04:00.345133 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:04:00.469820 locksmithd[1733]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:04:00.838041 tar[1712]: linux-arm64/README.md Jan 17 00:04:00.852100 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:04:00.900642 containerd[1715]: time="2026-01-17T00:04:00.900557820Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:04:00.940451 containerd[1715]: time="2026-01-17T00:04:00.940404580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:04:00.941871 containerd[1715]: time="2026-01-17T00:04:00.941839820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:04:00.941947 containerd[1715]: time="2026-01-17T00:04:00.941933900Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:04:00.942001 containerd[1715]: time="2026-01-17T00:04:00.941988740Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:04:00.942201 containerd[1715]: time="2026-01-17T00:04:00.942184020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:04:00.942276 containerd[1715]: time="2026-01-17T00:04:00.942262820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:04:00.942394 containerd[1715]: time="2026-01-17T00:04:00.942377340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:04:00.942454 containerd[1715]: time="2026-01-17T00:04:00.942440020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:04:00.942676 containerd[1715]: time="2026-01-17T00:04:00.942656500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:04:00.942748 containerd[1715]: time="2026-01-17T00:04:00.942734780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:04:00.942799 containerd[1715]: time="2026-01-17T00:04:00.942786380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:04:00.942850 containerd[1715]: time="2026-01-17T00:04:00.942838260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:04:00.942978 containerd[1715]: time="2026-01-17T00:04:00.942963460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:04:00.943235 containerd[1715]: time="2026-01-17T00:04:00.943217740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:04:00.943398 containerd[1715]: time="2026-01-17T00:04:00.943379980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:04:00.943452 containerd[1715]: time="2026-01-17T00:04:00.943440220Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:04:00.943595 containerd[1715]: time="2026-01-17T00:04:00.943580220Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:04:00.943700 containerd[1715]: time="2026-01-17T00:04:00.943685820Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:04:00.957925 containerd[1715]: time="2026-01-17T00:04:00.957895740Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:04:00.957995 containerd[1715]: time="2026-01-17T00:04:00.957944540Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:04:00.957995 containerd[1715]: time="2026-01-17T00:04:00.957963220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:04:00.957995 containerd[1715]: time="2026-01-17T00:04:00.957979820Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:04:00.958045 containerd[1715]: time="2026-01-17T00:04:00.957995340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:04:00.958151 containerd[1715]: time="2026-01-17T00:04:00.958134260Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:04:00.958358 containerd[1715]: time="2026-01-17T00:04:00.958342780Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:04:00.958461 containerd[1715]: time="2026-01-17T00:04:00.958444700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:04:00.958488 containerd[1715]: time="2026-01-17T00:04:00.958465740Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:04:00.958488 containerd[1715]: time="2026-01-17T00:04:00.958479500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958492420Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958504980Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958517540Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958542500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958557900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958570420Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958583020Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958593780Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958613020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958626340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958637980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958655820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958668620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959536 containerd[1715]: time="2026-01-17T00:04:00.958681540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958692940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958706580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958719140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958732300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958743420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958756100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958768220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958783980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958804140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958815980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958826620Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958882220Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958899140Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:04:00.959785 containerd[1715]: time="2026-01-17T00:04:00.958909420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:04:00.960018 containerd[1715]: time="2026-01-17T00:04:00.958920780Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:04:00.960018 containerd[1715]: time="2026-01-17T00:04:00.958931300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:04:00.960018 containerd[1715]: time="2026-01-17T00:04:00.958946420Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:04:00.960018 containerd[1715]: time="2026-01-17T00:04:00.958955580Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:04:00.960018 containerd[1715]: time="2026-01-17T00:04:00.958965940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:04:00.960149 containerd[1715]: time="2026-01-17T00:04:00.959229140Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:04:00.960149 containerd[1715]: time="2026-01-17T00:04:00.959284780Z" level=info msg="Connect containerd service" Jan 17 00:04:00.960149 containerd[1715]: time="2026-01-17T00:04:00.959318100Z" level=info msg="using legacy CRI server" Jan 17 00:04:00.960149 containerd[1715]: time="2026-01-17T00:04:00.959325140Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:04:00.960149 containerd[1715]: time="2026-01-17T00:04:00.959426420Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:04:00.963679 containerd[1715]: time="2026-01-17T00:04:00.962892100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:04:00.963679 
containerd[1715]: time="2026-01-17T00:04:00.963520420Z" level=info msg="Start subscribing containerd event" Jan 17 00:04:00.963679 containerd[1715]: time="2026-01-17T00:04:00.963575220Z" level=info msg="Start recovering state" Jan 17 00:04:00.963679 containerd[1715]: time="2026-01-17T00:04:00.963635980Z" level=info msg="Start event monitor" Jan 17 00:04:00.963679 containerd[1715]: time="2026-01-17T00:04:00.963646700Z" level=info msg="Start snapshots syncer" Jan 17 00:04:00.963792 containerd[1715]: time="2026-01-17T00:04:00.963655100Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:04:00.963792 containerd[1715]: time="2026-01-17T00:04:00.963749380Z" level=info msg="Start streaming server" Jan 17 00:04:00.964232 containerd[1715]: time="2026-01-17T00:04:00.964210500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:04:00.964271 containerd[1715]: time="2026-01-17T00:04:00.964253380Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:04:00.964761 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:04:00.973903 containerd[1715]: time="2026-01-17T00:04:00.973876580Z" level=info msg="containerd successfully booted in 0.074838s" Jan 17 00:04:01.094771 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:04:01.094879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:04:01.236709 sshd_keygen[1704]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:04:01.258577 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:04:01.270170 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:04:01.278412 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 00:04:01.285304 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:04:01.286432 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:04:01.301600 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:04:01.310249 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 00:04:01.319604 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:04:01.332840 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:04:01.344826 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 00:04:01.351315 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:04:01.356581 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:04:01.363588 systemd[1]: Startup finished in 621ms (kernel) + 16.038s (initrd) + 10.760s (userspace) = 27.421s. Jan 17 00:04:01.568840 kubelet[1813]: E0117 00:04:01.568791 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:04:01.571603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:04:01.571748 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:04:02.021100 login[1840]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 17 00:04:02.022841 login[1841]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:02.047993 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:04:02.055421 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:04:02.057718 systemd-logind[1696]: New session 1 of user core. Jan 17 00:04:02.081222 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:04:02.093822 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:04:02.096584 (systemd)[1850]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:04:02.216147 systemd[1850]: Queued start job for default target default.target. Jan 17 00:04:02.225974 systemd[1850]: Created slice app.slice - User Application Slice. Jan 17 00:04:02.226105 systemd[1850]: Reached target paths.target - Paths. Jan 17 00:04:02.226120 systemd[1850]: Reached target timers.target - Timers. Jan 17 00:04:02.227364 systemd[1850]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:04:02.238519 systemd[1850]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:04:02.238645 systemd[1850]: Reached target sockets.target - Sockets. Jan 17 00:04:02.238660 systemd[1850]: Reached target basic.target - Basic System. Jan 17 00:04:02.238697 systemd[1850]: Reached target default.target - Main User Target. Jan 17 00:04:02.238721 systemd[1850]: Startup finished in 136ms. Jan 17 00:04:02.238802 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:04:02.245664 systemd[1]: Started session-1.scope - Session 1 of User core. 
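session-1.scope plus user@500.service above is the standard pam_systemd pair: the scope tracks the login itself, while the user manager runs the per-user unit tree (app.slice, the user dbus.socket, and so on, as queued above). Both sides can be inspected with logind's tooling:

    # show active sessions and the per-user manager state for 'core'
    loginctl list-sessions
    loginctl user-status core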
Jan 17 00:04:02.989546 waagent[1837]: 2026-01-17T00:04:02.987175Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 00:04:02.991476 waagent[1837]: 2026-01-17T00:04:02.991430Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 17 00:04:02.994850 waagent[1837]: 2026-01-17T00:04:02.994813Z INFO Daemon Daemon Python: 3.11.9 Jan 17 00:04:02.999566 waagent[1837]: 2026-01-17T00:04:02.998618Z INFO Daemon Daemon Run daemon Jan 17 00:04:03.002259 waagent[1837]: 2026-01-17T00:04:03.002166Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 17 00:04:03.009543 waagent[1837]: 2026-01-17T00:04:03.008749Z INFO Daemon Daemon Using waagent for provisioning Jan 17 00:04:03.012763 waagent[1837]: 2026-01-17T00:04:03.012721Z INFO Daemon Daemon Activate resource disk Jan 17 00:04:03.016517 waagent[1837]: 2026-01-17T00:04:03.016479Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 00:04:03.023981 login[1840]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:03.025921 waagent[1837]: 2026-01-17T00:04:03.025625Z INFO Daemon Daemon Found device: None Jan 17 00:04:03.029253 waagent[1837]: 2026-01-17T00:04:03.029209Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 00:04:03.035915 waagent[1837]: 2026-01-17T00:04:03.035863Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 00:04:03.046640 waagent[1837]: 2026-01-17T00:04:03.045859Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:04:03.046909 systemd-logind[1696]: New session 2 of user core. Jan 17 00:04:03.050311 waagent[1837]: 2026-01-17T00:04:03.050269Z INFO Daemon Daemon Running default provisioning handler Jan 17 00:04:03.063722 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:04:03.067290 waagent[1837]: 2026-01-17T00:04:03.067053Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 00:04:03.079544 waagent[1837]: 2026-01-17T00:04:03.078363Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 00:04:03.086438 waagent[1837]: 2026-01-17T00:04:03.086386Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 00:04:03.091163 waagent[1837]: 2026-01-17T00:04:03.091097Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 00:04:03.174955 waagent[1837]: 2026-01-17T00:04:03.174860Z INFO Daemon Daemon Successfully mounted dvd Jan 17 00:04:03.202229 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 00:04:03.204055 waagent[1837]: 2026-01-17T00:04:03.203990Z INFO Daemon Daemon Detect protocol endpoint Jan 17 00:04:03.207928 waagent[1837]: 2026-01-17T00:04:03.207884Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:04:03.212502 waagent[1837]: 2026-01-17T00:04:03.212460Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 17 00:04:03.217296 waagent[1837]: 2026-01-17T00:04:03.217260Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 00:04:03.221421 waagent[1837]: 2026-01-17T00:04:03.221384Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 00:04:03.225262 waagent[1837]: 2026-01-17T00:04:03.225227Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 00:04:03.255441 waagent[1837]: 2026-01-17T00:04:03.255367Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 00:04:03.260562 waagent[1837]: 2026-01-17T00:04:03.260518Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 00:04:03.264514 waagent[1837]: 2026-01-17T00:04:03.264481Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 00:04:03.380516 waagent[1837]: 2026-01-17T00:04:03.380410Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 00:04:03.385902 waagent[1837]: 2026-01-17T00:04:03.385845Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 00:04:03.393963 waagent[1837]: 2026-01-17T00:04:03.393915Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:04:03.413577 waagent[1837]: 2026-01-17T00:04:03.413516Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 17 00:04:03.417981 waagent[1837]: 2026-01-17T00:04:03.417939Z INFO Daemon Jan 17 00:04:03.420114 waagent[1837]: 2026-01-17T00:04:03.420077Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9a3e83b8-c29b-474e-9f26-6567a16be11c eTag: 4939193731925727774 source: Fabric] Jan 17 00:04:03.428766 waagent[1837]: 2026-01-17T00:04:03.428723Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 00:04:03.433853 waagent[1837]: 2026-01-17T00:04:03.433812Z INFO Daemon Jan 17 00:04:03.436003 waagent[1837]: 2026-01-17T00:04:03.435969Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:04:03.444995 waagent[1837]: 2026-01-17T00:04:03.444966Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 00:04:03.513931 waagent[1837]: 2026-01-17T00:04:03.513822Z INFO Daemon Downloaded certificate {'thumbprint': '2437102BED211B1006A5970D2AFE750A4787EAD5', 'hasPrivateKey': True} Jan 17 00:04:03.521339 waagent[1837]: 2026-01-17T00:04:03.521296Z INFO Daemon Fetch goal state completed Jan 17 00:04:03.531145 waagent[1837]: 2026-01-17T00:04:03.531105Z INFO Daemon Daemon Starting provisioning Jan 17 00:04:03.534976 waagent[1837]: 2026-01-17T00:04:03.534933Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 00:04:03.538426 waagent[1837]: 2026-01-17T00:04:03.538392Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-f5e0a482e1] Jan 17 00:04:03.561555 waagent[1837]: 2026-01-17T00:04:03.560974Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-f5e0a482e1] Jan 17 00:04:03.565864 waagent[1837]: 2026-01-17T00:04:03.565815Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 00:04:03.571217 waagent[1837]: 2026-01-17T00:04:03.571175Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 00:04:03.597769 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:04:03.597776 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
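waagent's "Test for route to 168.63.129.16" above can be reproduced with ip route get; on this host the kernel should resolve it through the DHCP-installed host route:

    # same reachability check by hand; per the routing table logged later,
    # this should report: 168.63.129.16 via 10.200.20.1 dev eth0 src 10.200.20.17
    ip route get 168.63.129.16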
Jan 17 00:04:03.597802 systemd-networkd[1356]: eth0: DHCP lease lost Jan 17 00:04:03.598947 waagent[1837]: 2026-01-17T00:04:03.598881Z INFO Daemon Daemon Create user account if not exists Jan 17 00:04:03.603281 waagent[1837]: 2026-01-17T00:04:03.603242Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 00:04:03.607793 waagent[1837]: 2026-01-17T00:04:03.607757Z INFO Daemon Daemon Configure sudoer Jan 17 00:04:03.611390 waagent[1837]: 2026-01-17T00:04:03.611346Z INFO Daemon Daemon Configure sshd Jan 17 00:04:03.614940 waagent[1837]: 2026-01-17T00:04:03.614897Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 00:04:03.625102 waagent[1837]: 2026-01-17T00:04:03.625068Z INFO Daemon Daemon Deploy ssh public key. Jan 17 00:04:03.632626 systemd-networkd[1356]: eth0: DHCPv6 lease lost Jan 17 00:04:03.645563 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:04:04.774126 waagent[1837]: 2026-01-17T00:04:04.774078Z INFO Daemon Daemon Provisioning complete Jan 17 00:04:04.790349 waagent[1837]: 2026-01-17T00:04:04.790306Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 00:04:04.795044 waagent[1837]: 2026-01-17T00:04:04.795004Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 17 00:04:04.802608 waagent[1837]: 2026-01-17T00:04:04.802572Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 00:04:04.926520 waagent[1899]: 2026-01-17T00:04:04.926450Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 00:04:04.927414 waagent[1899]: 2026-01-17T00:04:04.926933Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 17 00:04:04.927414 waagent[1899]: 2026-01-17T00:04:04.927005Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 00:04:04.991553 waagent[1899]: 2026-01-17T00:04:04.991113Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 00:04:04.991553 waagent[1899]: 2026-01-17T00:04:04.991346Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:04:04.991553 waagent[1899]: 2026-01-17T00:04:04.991405Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:04:04.999464 waagent[1899]: 2026-01-17T00:04:04.999405Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:04:05.004855 waagent[1899]: 2026-01-17T00:04:05.004817Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 17 00:04:05.005284 waagent[1899]: 2026-01-17T00:04:05.005243Z INFO ExtHandler Jan 17 00:04:05.005347 waagent[1899]: 2026-01-17T00:04:05.005320Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 54f4223c-97b5-42d1-b18e-a5fd18817aa9 eTag: 4939193731925727774 source: Fabric] Jan 17 00:04:05.005643 waagent[1899]: 2026-01-17T00:04:05.005604Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 17 00:04:05.006196 waagent[1899]: 2026-01-17T00:04:05.006153Z INFO ExtHandler Jan 17 00:04:05.006257 waagent[1899]: 2026-01-17T00:04:05.006231Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:04:05.009895 waagent[1899]: 2026-01-17T00:04:05.009867Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:04:05.081139 waagent[1899]: 2026-01-17T00:04:05.081005Z INFO ExtHandler Downloaded certificate {'thumbprint': '2437102BED211B1006A5970D2AFE750A4787EAD5', 'hasPrivateKey': True} Jan 17 00:04:05.081633 waagent[1899]: 2026-01-17T00:04:05.081588Z INFO ExtHandler Fetch goal state completed Jan 17 00:04:05.096874 waagent[1899]: 2026-01-17T00:04:05.096827Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1899 Jan 17 00:04:05.097023 waagent[1899]: 2026-01-17T00:04:05.096991Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 00:04:05.098664 waagent[1899]: 2026-01-17T00:04:05.098623Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 00:04:05.099020 waagent[1899]: 2026-01-17T00:04:05.098986Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 00:04:05.118882 waagent[1899]: 2026-01-17T00:04:05.118838Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 00:04:05.119070 waagent[1899]: 2026-01-17T00:04:05.119034Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 00:04:05.124784 waagent[1899]: 2026-01-17T00:04:05.124748Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 17 00:04:05.130904 systemd[1]: Reloading requested from client PID 1912 ('systemctl') (unit waagent.service)... Jan 17 00:04:05.130918 systemd[1]: Reloading... Jan 17 00:04:05.210576 zram_generator::config[1947]: No configuration found. Jan 17 00:04:05.321511 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:04:05.399484 systemd[1]: Reloading finished in 268 ms. Jan 17 00:04:05.419077 waagent[1899]: 2026-01-17T00:04:05.418728Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 00:04:05.425416 systemd[1]: Reloading requested from client PID 2001 ('systemctl') (unit waagent.service)... Jan 17 00:04:05.425428 systemd[1]: Reloading... Jan 17 00:04:05.488810 zram_generator::config[2033]: No configuration found. Jan 17 00:04:05.599981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:04:05.673963 systemd[1]: Reloading finished in 248 ms. Jan 17 00:04:05.698554 waagent[1899]: 2026-01-17T00:04:05.696133Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 00:04:05.698554 waagent[1899]: 2026-01-17T00:04:05.696287Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 00:04:06.144823 waagent[1899]: 2026-01-17T00:04:06.144566Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 17 00:04:06.148559 waagent[1899]: 2026-01-17T00:04:06.148007Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 00:04:06.148832 waagent[1899]: 2026-01-17T00:04:06.148790Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:04:06.148903 waagent[1899]: 2026-01-17T00:04:06.148877Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:04:06.149103 waagent[1899]: 2026-01-17T00:04:06.149069Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 17 00:04:06.149227 waagent[1899]: 2026-01-17T00:04:06.149155Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 00:04:06.149771 waagent[1899]: 2026-01-17T00:04:06.149635Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 00:04:06.149771 waagent[1899]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 00:04:06.149771 waagent[1899]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 00:04:06.149771 waagent[1899]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 00:04:06.149771 waagent[1899]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:04:06.149771 waagent[1899]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:04:06.149771 waagent[1899]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:04:06.149771 waagent[1899]: 2026-01-17T00:04:06.149702Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 17 00:04:06.149947 waagent[1899]: 2026-01-17T00:04:06.149821Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:04:06.149947 waagent[1899]: 2026-01-17T00:04:06.149900Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:04:06.150111 waagent[1899]: 2026-01-17T00:04:06.150011Z INFO EnvHandler ExtHandler Configure routes Jan 17 00:04:06.150212 waagent[1899]: 2026-01-17T00:04:06.150152Z INFO EnvHandler ExtHandler Gateway:None Jan 17 00:04:06.150878 waagent[1899]: 2026-01-17T00:04:06.150699Z INFO EnvHandler ExtHandler Routes:None Jan 17 00:04:06.151121 waagent[1899]: 2026-01-17T00:04:06.151083Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 00:04:06.151221 waagent[1899]: 2026-01-17T00:04:06.151171Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 00:04:06.151612 waagent[1899]: 2026-01-17T00:04:06.151558Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 00:04:06.151721 waagent[1899]: 2026-01-17T00:04:06.151676Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
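The /proc/net/route dump above stores addresses as little-endian hex: destination 10813FA8 byte-reverses to 168.63.129.16 (the WireServer) and gateway 0114C80A to 10.200.20.1, matching the DHCP lease logged earlier. Decoding one field in bash:

    # reverse the byte order of a /proc/net/route address field
    hex=0114C80A
    printf '%d.%d.%d.%d\n' "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}"
    # -> 10.200.20.1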
Jan 17 00:04:06.151832 waagent[1899]: 2026-01-17T00:04:06.151797Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 00:04:06.157291 waagent[1899]: 2026-01-17T00:04:06.157245Z INFO ExtHandler ExtHandler Jan 17 00:04:06.157651 waagent[1899]: 2026-01-17T00:04:06.157607Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f28f320a-27e7-4270-ad05-d4f8ba065afd correlation 04e7c66d-ba44-4fbe-b037-828eee38ac2e created: 2026-01-17T00:03:04.549530Z] Jan 17 00:04:06.158706 waagent[1899]: 2026-01-17T00:04:06.158664Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:04:06.159550 waagent[1899]: 2026-01-17T00:04:06.159302Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 17 00:04:06.186324 waagent[1899]: 2026-01-17T00:04:06.186269Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A4A89AFC-535E-46C2-B117-04DD9F378063;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 00:04:06.189221 waagent[1899]: 2026-01-17T00:04:06.188834Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 00:04:06.189221 waagent[1899]: Executing ['ip', '-a', '-o', 'link']: Jan 17 00:04:06.189221 waagent[1899]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 00:04:06.189221 waagent[1899]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:86:19 brd ff:ff:ff:ff:ff:ff Jan 17 00:04:06.189221 waagent[1899]: 3: enP6208s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:86:19 brd ff:ff:ff:ff:ff:ff\ altname enP6208p0s2 Jan 17 00:04:06.189221 waagent[1899]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 00:04:06.189221 waagent[1899]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 00:04:06.189221 waagent[1899]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 00:04:06.189221 waagent[1899]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 00:04:06.189221 waagent[1899]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 00:04:06.189221 waagent[1899]: 2: eth0 inet6 fe80::222:48ff:fe7b:8619/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 00:04:06.270509 waagent[1899]: 2026-01-17T00:04:06.269686Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules: Jan 17 00:04:06.270509 waagent[1899]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:04:06.270509 waagent[1899]: pkts bytes target prot opt in out source destination Jan 17 00:04:06.270509 waagent[1899]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:04:06.270509 waagent[1899]: pkts bytes target prot opt in out source destination Jan 17 00:04:06.270509 waagent[1899]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:04:06.270509 waagent[1899]: pkts bytes target prot opt in out source destination Jan 17 00:04:06.270509 waagent[1899]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:04:06.270509 waagent[1899]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:04:06.270509 waagent[1899]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:04:06.272416 waagent[1899]: 2026-01-17T00:04:06.272372Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 00:04:06.272416 waagent[1899]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:04:06.272416 waagent[1899]: pkts bytes target prot opt in out source destination Jan 17 00:04:06.272416 waagent[1899]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:04:06.272416 waagent[1899]: pkts bytes target prot opt in out source destination Jan 17 00:04:06.272416 waagent[1899]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:04:06.272416 waagent[1899]: pkts bytes target prot opt in out source destination Jan 17 00:04:06.272416 waagent[1899]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:04:06.272416 waagent[1899]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:04:06.272416 waagent[1899]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:04:06.272919 waagent[1899]: 2026-01-17T00:04:06.272884Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 00:04:11.691442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:04:11.699772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:04:11.797492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:04:11.800784 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:04:11.929927 kubelet[2128]: E0117 00:04:11.929876 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:04:11.933435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:04:11.933704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:04:21.088488 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:04:21.095715 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.16.10:56176.service - OpenSSH per-connection server daemon (10.200.16.10:56176). 
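The three OUTPUT rules waagent prints above implement its WireServer protection: permit DNS to 168.63.129.16, permit root-owned (UID 0) TCP to it, and drop every other new connection to it. An illustrative reconstruction, not waagent's own invocation (the log does not show which iptables table it used):

    # approximate equivalents of the logged rules
    WIRESERVER=168.63.129.16
    iptables -A OUTPUT -d "$WIRESERVER" -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d "$WIRESERVER" -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d "$WIRESERVER" -p tcp -m conntrack --ctstate INVALID,NEW -j DROP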
Jan 17 00:04:21.721756 sshd[2136]: Accepted publickey for core from 10.200.16.10 port 56176 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:04:21.723059 sshd[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:21.727597 systemd-logind[1696]: New session 3 of user core. Jan 17 00:04:21.732646 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:04:21.941376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:04:21.949671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:04:22.150704 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.16.10:56182.service - OpenSSH per-connection server daemon (10.200.16.10:56182). Jan 17 00:04:22.292911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:04:22.296912 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:04:22.333053 kubelet[2151]: E0117 00:04:22.333006 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:04:22.335274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:04:22.335394 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:04:22.649651 sshd[2144]: Accepted publickey for core from 10.200.16.10 port 56182 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:04:22.650957 sshd[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:22.655030 systemd-logind[1696]: New session 4 of user core. Jan 17 00:04:22.664720 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:04:23.003971 sshd[2144]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:23.007932 systemd[1]: sshd@1-10.200.20.17:22-10.200.16.10:56182.service: Deactivated successfully. Jan 17 00:04:23.009483 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:04:23.010191 systemd-logind[1696]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:04:23.011285 systemd-logind[1696]: Removed session 4. Jan 17 00:04:23.095091 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.16.10:56186.service - OpenSSH per-connection server daemon (10.200.16.10:56186). Jan 17 00:04:23.586538 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 56186 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:04:23.587853 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:23.592161 systemd-logind[1696]: New session 5 of user core. Jan 17 00:04:23.597759 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:04:23.690104 chronyd[1684]: Selected source PHC0 Jan 17 00:04:23.933590 sshd[2164]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:23.936810 systemd[1]: sshd@2-10.200.20.17:22-10.200.16.10:56186.service: Deactivated successfully. Jan 17 00:04:23.938465 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:04:23.939205 systemd-logind[1696]: Session 5 logged out. Waiting for processes to exit. 
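"Selected source PHC0" above means chronyd synchronized to a PTP hardware clock rather than a network NTP server; on Azure/Hyper-V guests this is normally the paravirtualized clock, commonly exposed as /dev/ptp_hyperv (an assumption here; the device path does not appear in this log). To check the selection:

    # typical refclock stanza (assumed): refclock PHC /dev/ptp_hyperv poll 3 dpoll -2 offset 0
    chronyc sources -v
    chronyc tracking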
Jan 17 00:04:23.939948 systemd-logind[1696]: Removed session 5. Jan 17 00:04:24.017193 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.16.10:56200.service - OpenSSH per-connection server daemon (10.200.16.10:56200). Jan 17 00:04:24.470075 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 56200 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:04:24.471355 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:24.475693 systemd-logind[1696]: New session 6 of user core. Jan 17 00:04:24.482683 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:04:24.802304 sshd[2171]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:24.804804 systemd[1]: sshd@3-10.200.20.17:22-10.200.16.10:56200.service: Deactivated successfully. Jan 17 00:04:24.806617 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:04:24.808443 systemd-logind[1696]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:04:24.810266 systemd-logind[1696]: Removed session 6. Jan 17 00:04:24.894371 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.16.10:56208.service - OpenSSH per-connection server daemon (10.200.16.10:56208). Jan 17 00:04:25.382333 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 56208 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:04:25.383635 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:25.387121 systemd-logind[1696]: New session 7 of user core. Jan 17 00:04:25.395639 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:04:25.825797 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:04:25.826059 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:04:25.840335 sudo[2181]: pam_unix(sudo:session): session closed for user root Jan 17 00:04:25.912306 sshd[2178]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:25.915947 systemd[1]: sshd@4-10.200.20.17:22-10.200.16.10:56208.service: Deactivated successfully. Jan 17 00:04:25.917409 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:04:25.918231 systemd-logind[1696]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:04:25.919434 systemd-logind[1696]: Removed session 7. Jan 17 00:04:26.002975 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.16.10:56216.service - OpenSSH per-connection server daemon (10.200.16.10:56216). Jan 17 00:04:26.489929 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 56216 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:04:26.491282 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:26.494801 systemd-logind[1696]: New session 8 of user core. Jan 17 00:04:26.505694 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 00:04:26.764814 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:04:26.765075 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:04:26.767911 sudo[2190]: pam_unix(sudo:session): session closed for user root Jan 17 00:04:26.771892 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:04:26.772128 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:04:26.784730 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:04:26.785792 auditctl[2193]: No rules Jan 17 00:04:26.786088 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:04:26.786241 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:04:26.788839 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:04:26.809139 augenrules[2211]: No rules Jan 17 00:04:26.811624 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:04:26.813742 sudo[2189]: pam_unix(sudo:session): session closed for user root Jan 17 00:04:26.890896 sshd[2186]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:26.893464 systemd[1]: sshd@5-10.200.20.17:22-10.200.16.10:56216.service: Deactivated successfully. Jan 17 00:04:26.894894 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:04:26.896080 systemd-logind[1696]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:04:26.897150 systemd-logind[1696]: Removed session 8. Jan 17 00:04:26.974847 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.16.10:56226.service - OpenSSH per-connection server daemon (10.200.16.10:56226). Jan 17 00:04:27.426945 sshd[2219]: Accepted publickey for core from 10.200.16.10 port 56226 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:04:27.428277 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:27.431873 systemd-logind[1696]: New session 9 of user core. Jan 17 00:04:27.438736 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:04:27.680919 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:04:27.681394 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:04:28.940766 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:04:28.942000 (dockerd)[2237]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:04:29.688453 dockerd[2237]: time="2026-01-17T00:04:29.688397152Z" level=info msg="Starting up" Jan 17 00:04:31.168934 dockerd[2237]: time="2026-01-17T00:04:31.168891872Z" level=info msg="Loading containers: start." Jan 17 00:04:31.427555 kernel: Initializing XFRM netlink socket Jan 17 00:04:31.625636 systemd-networkd[1356]: docker0: Link UP Jan 17 00:04:31.644549 dockerd[2237]: time="2026-01-17T00:04:31.644501392Z" level=info msg="Loading containers: done." 
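The pair of sudo commands at the top of this block explains the "No rules" lines that follow: the SELinux and default audit rule files are removed, then audit-rules.service is restarted, so auditctl and augenrules both find an empty rule set. Replayed as a script, the sequence is just:

    # The same sequence the log shows "core" running via sudo. After the
    # rule files are gone, restarting audit-rules leaves the kernel audit
    # subsystem with an empty rule set ("No rules" above).
    import subprocess

    subprocess.run(
        ["rm", "-rf",
         "/etc/audit/rules.d/80-selinux.rules",
         "/etc/audit/rules.d/99-default.rules"],
        check=True,
    )
    subprocess.run(["systemctl", "restart", "audit-rules"], check=True)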
Jan 17 00:04:32.012769 dockerd[2237]: time="2026-01-17T00:04:32.012705670Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:04:32.012913 dockerd[2237]: time="2026-01-17T00:04:32.012841790Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:04:32.013008 dockerd[2237]: time="2026-01-17T00:04:32.012983270Z" level=info msg="Daemon has completed initialization" Jan 17 00:04:32.215301 dockerd[2237]: time="2026-01-17T00:04:32.214713250Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:04:32.214808 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:04:32.441304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:04:32.446784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:04:32.741090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:04:32.745838 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:04:32.780732 kubelet[2379]: E0117 00:04:32.780672 2379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:04:32.783567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:04:32.783871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:04:33.041153 containerd[1715]: time="2026-01-17T00:04:33.041044587Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 17 00:04:37.393736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount856847180.mount: Deactivated successfully. Jan 17 00:04:42.278398 waagent[1899]: 2026-01-17T00:04:42.278332Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 17 00:04:42.284880 waagent[1899]: 2026-01-17T00:04:42.284823Z INFO ExtHandler Jan 17 00:04:42.284985 waagent[1899]: 2026-01-17T00:04:42.284941Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 173d19c4-753b-4e8d-9169-a3e568570820 eTag: 18047104200170345117 source: Fabric] Jan 17 00:04:42.285312 waagent[1899]: 2026-01-17T00:04:42.285267Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 17 00:04:42.285930 waagent[1899]: 2026-01-17T00:04:42.285867Z INFO ExtHandler Jan 17 00:04:42.285986 waagent[1899]: 2026-01-17T00:04:42.285956Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 17 00:04:42.375737 waagent[1899]: 2026-01-17T00:04:42.375691Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:04:42.447119 waagent[1899]: 2026-01-17T00:04:42.447036Z INFO ExtHandler Downloaded certificate {'thumbprint': '2437102BED211B1006A5970D2AFE750A4787EAD5', 'hasPrivateKey': True} Jan 17 00:04:42.447976 waagent[1899]: 2026-01-17T00:04:42.447591Z INFO ExtHandler Fetch goal state completed Jan 17 00:04:42.448814 waagent[1899]: 2026-01-17T00:04:42.448750Z INFO ExtHandler ExtHandler Jan 17 00:04:42.448892 waagent[1899]: 2026-01-17T00:04:42.448862Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 2b2a9dae-6f02-4635-84ee-688cd9ef4114 correlation 04e7c66d-ba44-4fbe-b037-828eee38ac2e created: 2026-01-17T00:04:33.049904Z] Jan 17 00:04:42.449386 waagent[1899]: 2026-01-17T00:04:42.449323Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:04:42.450287 waagent[1899]: 2026-01-17T00:04:42.450238Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Jan 17 00:04:42.570567 containerd[1715]: time="2026-01-17T00:04:42.569693585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:42.571999 containerd[1715]: time="2026-01-17T00:04:42.571960828Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Jan 17 00:04:42.574607 containerd[1715]: time="2026-01-17T00:04:42.574562431Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:42.579846 containerd[1715]: time="2026-01-17T00:04:42.578557756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:42.579846 containerd[1715]: time="2026-01-17T00:04:42.579653038Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 9.538567251s" Jan 17 00:04:42.579846 containerd[1715]: time="2026-01-17T00:04:42.579683558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 17 00:04:42.581248 containerd[1715]: time="2026-01-17T00:04:42.581208640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 17 00:04:42.941306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:04:42.949688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:04:43.049245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:04:43.053208 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:04:43.142064 kubelet[2457]: E0117 00:04:43.142017 2457 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:04:43.144916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:04:43.145162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:04:43.470711 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 17 00:04:44.950230 update_engine[1699]: I20260117 00:04:44.949563 1699 update_attempter.cc:509] Updating boot flags... Jan 17 00:04:46.668586 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2475) Jan 17 00:04:46.747747 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2478) Jan 17 00:04:46.819093 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2478) Jan 17 00:04:48.188563 containerd[1715]: time="2026-01-17T00:04:48.187660525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:48.190956 containerd[1715]: time="2026-01-17T00:04:48.190921726Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Jan 17 00:04:48.193454 containerd[1715]: time="2026-01-17T00:04:48.193407167Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:48.198934 containerd[1715]: time="2026-01-17T00:04:48.197578249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:48.198934 containerd[1715]: time="2026-01-17T00:04:48.198636049Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 5.617381889s" Jan 17 00:04:48.198934 containerd[1715]: time="2026-01-17T00:04:48.198664209Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 17 00:04:48.199246 containerd[1715]: time="2026-01-17T00:04:48.199218169Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 17 00:04:52.331364 containerd[1715]: time="2026-01-17T00:04:52.331294904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:52.333346 containerd[1715]: time="2026-01-17T00:04:52.333313825Z" level=info msg="stop pulling image 
registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Jan 17 00:04:52.335652 containerd[1715]: time="2026-01-17T00:04:52.335604306Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:52.340614 containerd[1715]: time="2026-01-17T00:04:52.340552268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:04:52.341810 containerd[1715]: time="2026-01-17T00:04:52.341698588Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 4.142388699s" Jan 17 00:04:52.341810 containerd[1715]: time="2026-01-17T00:04:52.341727828Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 17 00:04:52.342805 containerd[1715]: time="2026-01-17T00:04:52.342769709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:04:53.191391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:04:53.198754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:04:53.648029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:04:53.652014 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:04:53.738800 kubelet[2573]: E0117 00:04:53.738736 2573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:04:53.741574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:04:53.741723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:04:59.787693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount502998607.mount: Deactivated successfully. 
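A quick check on the crash-loop cadence: the "Scheduled restart job" entries for restart counters 2 through 5 land at 00:04:21.94, 00:04:32.44, 00:04:42.94 and 00:04:53.19, roughly 10.5 s apart. That spacing is consistent with a unit using RestartSec=10s plus a brief failing run, though the unit file itself is not shown in this log, so treat the 10 s figure as an assumption.

    # Sanity check: spacing of the kubelet "Scheduled restart job" entries.
    # Timestamps copied from the journal above; RestartSec=10s is an
    # assumption about the unit file, which this log does not show.
    from datetime import datetime

    restarts = [
        "00:04:21.941376",  # restart counter 2
        "00:04:32.441304",  # restart counter 3
        "00:04:42.941306",  # restart counter 4
        "00:04:53.191391",  # restart counter 5
    ]
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
    for a, b in zip(ts, ts[1:]):
        print(f"{(b - a).total_seconds():.2f} s")  # ~10.25-10.50 s each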
Jan 17 00:05:00.124550 containerd[1715]: time="2026-01-17T00:05:00.123800902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:00.126156 containerd[1715]: time="2026-01-17T00:05:00.126125582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 17 00:05:00.128687 containerd[1715]: time="2026-01-17T00:05:00.128646743Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:00.135323 containerd[1715]: time="2026-01-17T00:05:00.135263266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:00.136001 containerd[1715]: time="2026-01-17T00:05:00.135883666Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 7.792992317s" Jan 17 00:05:00.136001 containerd[1715]: time="2026-01-17T00:05:00.135915066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 17 00:05:00.136496 containerd[1715]: time="2026-01-17T00:05:00.136467627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 17 00:05:00.797793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229938070.mount: Deactivated successfully. 
Jan 17 00:05:01.759295 containerd[1715]: time="2026-01-17T00:05:01.759244970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:01.761836 containerd[1715]: time="2026-01-17T00:05:01.761808491Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jan 17 00:05:01.764334 containerd[1715]: time="2026-01-17T00:05:01.764309852Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:01.768671 containerd[1715]: time="2026-01-17T00:05:01.768605374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:01.769886 containerd[1715]: time="2026-01-17T00:05:01.769762134Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.633261747s" Jan 17 00:05:01.769886 containerd[1715]: time="2026-01-17T00:05:01.769794854Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 17 00:05:01.770506 containerd[1715]: time="2026-01-17T00:05:01.770329215Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:05:02.298267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152484261.mount: Deactivated successfully. 
Jan 17 00:05:02.316021 containerd[1715]: time="2026-01-17T00:05:02.315973998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:02.318633 containerd[1715]: time="2026-01-17T00:05:02.318447479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 17 00:05:02.321317 containerd[1715]: time="2026-01-17T00:05:02.321295200Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:02.325345 containerd[1715]: time="2026-01-17T00:05:02.325313841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:02.326482 containerd[1715]: time="2026-01-17T00:05:02.326154402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 555.795827ms" Jan 17 00:05:02.326482 containerd[1715]: time="2026-01-17T00:05:02.326184082Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 17 00:05:02.326878 containerd[1715]: time="2026-01-17T00:05:02.326851002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 17 00:05:02.946110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650725800.mount: Deactivated successfully. Jan 17 00:05:03.941405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 00:05:03.949776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:04.055048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:05:04.058739 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:05:04.098062 kubelet[2660]: E0117 00:05:04.098016 2660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:05:04.100851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:05:04.100993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:05:14.191552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 17 00:05:14.201146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:14.468449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:05:14.472921 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:05:14.504104 kubelet[2695]: E0117 00:05:14.504051 2695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:05:14.506916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:05:14.507055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:05:23.558441 containerd[1715]: time="2026-01-17T00:05:23.558380634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:23.561132 containerd[1715]: time="2026-01-17T00:05:23.561076355Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Jan 17 00:05:23.563837 containerd[1715]: time="2026-01-17T00:05:23.563793356Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:23.569234 containerd[1715]: time="2026-01-17T00:05:23.568812877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:23.570026 containerd[1715]: time="2026-01-17T00:05:23.569994718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 21.243107316s" Jan 17 00:05:23.570079 containerd[1715]: time="2026-01-17T00:05:23.570025398Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 17 00:05:24.692078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 17 00:05:24.699684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:24.809789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:05:24.814076 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:05:24.881043 kubelet[2756]: E0117 00:05:24.881001 2756 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:05:24.883968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:05:24.884649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:05:29.329562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
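Every "Pulled image" entry in this log reports both a byte count and a duration, so the effective registry throughput can be read off directly; the etcd image above, for example, moved 70,026,017 bytes in 21.24 s, about 3.3 MB/s. The figures below are copied verbatim from the entries above:

    # Effective pull throughput for each "Pulled image" entry above
    # (sizes and durations copied verbatim from the log).
    pulls = {
        "kube-apiserver:v1.33.7":          (27_383_880,  9.538567251),
        "kube-controller-manager:v1.33.7": (25_137_562,  5.617381889),
        "kube-scheduler:v1.33.7":          (19_882_566,  4.142388699),
        "kube-proxy:v1.33.7":              (28_257_692,  7.792992317),
        "coredns/coredns:v1.12.0":         (19_148_915,  1.633261747),
        "pause:3.10":                      (   267_933,  0.555795827),
        "etcd:3.5.21-0":                   (70_026_017, 21.243107316),
    }
    for image, (size, seconds) in pulls.items():
        print(f"{image:34s} {size / seconds / 1e6:6.2f} MB/s")
    # etcd, the largest image, sustains ~3.3 MB/s; the tiny pause image is
    # dominated by round-trip latency rather than bandwidth.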
Jan 17 00:05:29.339810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:29.364655 systemd[1]: Reloading requested from client PID 2770 ('systemctl') (unit session-9.scope)... Jan 17 00:05:29.364790 systemd[1]: Reloading... Jan 17 00:05:29.479476 zram_generator::config[2806]: No configuration found. Jan 17 00:05:29.578924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:05:29.656220 systemd[1]: Reloading finished in 291 ms. Jan 17 00:05:29.702488 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:05:29.702580 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:05:29.702911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:05:29.707842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:29.854971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:05:29.859970 (kubelet)[2878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:05:29.892169 kubelet[2878]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:05:29.892629 kubelet[2878]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:05:29.892629 kubelet[2878]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:05:29.892629 kubelet[2878]: I0117 00:05:29.892570 2878 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:05:30.660550 kubelet[2878]: I0117 00:05:30.659732 2878 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:05:30.660550 kubelet[2878]: I0117 00:05:30.659762 2878 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:05:30.660550 kubelet[2878]: I0117 00:05:30.660142 2878 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:05:30.679341 kubelet[2878]: E0117 00:05:30.679285 2878 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:05:30.683015 kubelet[2878]: I0117 00:05:30.682990 2878 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:05:30.691843 kubelet[2878]: E0117 00:05:30.691812 2878 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:05:30.691952 kubelet[2878]: I0117 00:05:30.691940 2878 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:05:30.695356 kubelet[2878]: I0117 00:05:30.695336 2878 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:05:30.696709 kubelet[2878]: I0117 00:05:30.696675 2878 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:05:30.696942 kubelet[2878]: I0117 00:05:30.696793 2878 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-f5e0a482e1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:05:30.697078 kubelet[2878]: I0117 00:05:30.697066 2878 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:05:30.697130 kubelet[2878]: I0117 00:05:30.697123 2878 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:05:30.697300 kubelet[2878]: I0117 00:05:30.697288 2878 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:05:30.700095 kubelet[2878]: I0117 00:05:30.700079 2878 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:05:30.700181 kubelet[2878]: I0117 00:05:30.700171 2878 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:05:30.700250 kubelet[2878]: I0117 00:05:30.700242 2878 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:05:30.701516 kubelet[2878]: I0117 00:05:30.701496 2878 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:05:30.704285 kubelet[2878]: E0117 00:05:30.704260 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f5e0a482e1&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:05:30.704926 kubelet[2878]: E0117 00:05:30.704898 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 17 00:05:30.705029 kubelet[2878]: I0117 00:05:30.705012 2878 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:05:30.705624 kubelet[2878]: I0117 00:05:30.705605 2878 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:05:30.705679 kubelet[2878]: W0117 00:05:30.705666 2878 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:05:30.708205 kubelet[2878]: I0117 00:05:30.708093 2878 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:05:30.708205 kubelet[2878]: I0117 00:05:30.708137 2878 server.go:1289] "Started kubelet" Jan 17 00:05:30.709555 kubelet[2878]: I0117 00:05:30.709058 2878 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:05:30.709963 kubelet[2878]: I0117 00:05:30.709947 2878 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:05:30.710655 kubelet[2878]: I0117 00:05:30.710602 2878 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:05:30.711020 kubelet[2878]: I0117 00:05:30.710992 2878 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:05:30.712907 kubelet[2878]: E0117 00:05:30.711381 2878 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-f5e0a482e1.188b5be802a75500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-f5e0a482e1,UID:ci-4081.3.6-n-f5e0a482e1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-f5e0a482e1,},FirstTimestamp:2026-01-17 00:05:30.70811264 +0000 UTC m=+0.844920040,LastTimestamp:2026-01-17 00:05:30.70811264 +0000 UTC m=+0.844920040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-f5e0a482e1,}" Jan 17 00:05:30.715945 kubelet[2878]: E0117 00:05:30.715924 2878 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:05:30.716764 kubelet[2878]: I0117 00:05:30.716739 2878 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:05:30.718251 kubelet[2878]: I0117 00:05:30.718230 2878 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:05:30.719999 kubelet[2878]: I0117 00:05:30.718611 2878 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:05:30.720886 kubelet[2878]: I0117 00:05:30.718632 2878 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:05:30.720886 kubelet[2878]: E0117 00:05:30.718741 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:30.720968 kubelet[2878]: I0117 00:05:30.720926 2878 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:05:30.721227 kubelet[2878]: E0117 00:05:30.721203 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:05:30.721276 kubelet[2878]: E0117 00:05:30.721257 2878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f5e0a482e1?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms" Jan 17 00:05:30.724548 kubelet[2878]: I0117 00:05:30.723260 2878 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:05:30.724548 kubelet[2878]: I0117 00:05:30.723274 2878 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:05:30.724548 kubelet[2878]: I0117 00:05:30.723327 2878 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:05:30.724548 kubelet[2878]: I0117 00:05:30.724102 2878 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:05:30.760607 kubelet[2878]: I0117 00:05:30.760571 2878 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:05:30.760607 kubelet[2878]: I0117 00:05:30.760604 2878 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:05:30.760743 kubelet[2878]: I0117 00:05:30.760628 2878 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
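While the API server at 10.200.20.17:6443 refuses connections, the lease controller backs off exponentially: interval="200ms" here, then 400ms, 800ms, 1.6s and 3.2s in the entries below, doubling on each failure. A minimal sketch of that schedule follows; the 7 s ceiling is an assumption about the controller's cap, not something this log demonstrates.

    # The retry intervals logged by the lease controller:
    # 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, i.e. doubling each time.
    # The 7 s cap below is an assumption, not shown by this log.
    def backoff_schedule(base: float = 0.2, factor: float = 2.0, cap: float = 7.0):
        delay = base
        while True:
            yield min(delay, cap)
            delay *= factor

    gen = backoff_schedule()
    print([next(gen) for _ in range(6)])  # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]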
Jan 17 00:05:30.760743 kubelet[2878]: I0117 00:05:30.760635 2878 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:05:30.760743 kubelet[2878]: E0117 00:05:30.760674 2878 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:05:30.762878 kubelet[2878]: E0117 00:05:30.762744 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:05:30.821204 kubelet[2878]: E0117 00:05:30.821175 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:30.825487 kubelet[2878]: I0117 00:05:30.825466 2878 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:05:30.825487 kubelet[2878]: I0117 00:05:30.825482 2878 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:05:30.825614 kubelet[2878]: I0117 00:05:30.825502 2878 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:05:30.836517 kubelet[2878]: I0117 00:05:30.836494 2878 policy_none.go:49] "None policy: Start" Jan 17 00:05:30.836517 kubelet[2878]: I0117 00:05:30.836516 2878 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:05:30.836640 kubelet[2878]: I0117 00:05:30.836537 2878 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:05:30.851043 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:05:30.859447 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:05:30.861302 kubelet[2878]: E0117 00:05:30.861283 2878 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:05:30.872092 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:05:30.873262 kubelet[2878]: E0117 00:05:30.873239 2878 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:05:30.873436 kubelet[2878]: I0117 00:05:30.873421 2878 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:05:30.873480 kubelet[2878]: I0117 00:05:30.873436 2878 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:05:30.874004 kubelet[2878]: I0117 00:05:30.873749 2878 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:05:30.875134 kubelet[2878]: E0117 00:05:30.875111 2878 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:05:30.875201 kubelet[2878]: E0117 00:05:30.875166 2878 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:30.921930 kubelet[2878]: E0117 00:05:30.921820 2878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f5e0a482e1?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms" Jan 17 00:05:30.975377 kubelet[2878]: I0117 00:05:30.975331 2878 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:30.975690 kubelet[2878]: E0117 00:05:30.975662 2878 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.075387 systemd[1]: Created slice kubepods-burstable-pod1476f040af750779544b6696b24ce5d3.slice - libcontainer container kubepods-burstable-pod1476f040af750779544b6696b24ce5d3.slice. Jan 17 00:05:31.083193 kubelet[2878]: E0117 00:05:31.083156 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.087017 systemd[1]: Created slice kubepods-burstable-pod9c80ac9745f453803a11deda78f4aaf9.slice - libcontainer container kubepods-burstable-pod9c80ac9745f453803a11deda78f4aaf9.slice. Jan 17 00:05:31.088951 kubelet[2878]: E0117 00:05:31.088927 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.090426 systemd[1]: Created slice kubepods-burstable-pod5ccc2459b2c46195981bb2af963c1a2d.slice - libcontainer container kubepods-burstable-pod5ccc2459b2c46195981bb2af963c1a2d.slice. 
Jan 17 00:05:31.092097 kubelet[2878]: E0117 00:05:31.092074 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122427 kubelet[2878]: I0117 00:05:31.122404 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122481 kubelet[2878]: I0117 00:05:31.122439 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122481 kubelet[2878]: I0117 00:05:31.122460 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122481 kubelet[2878]: I0117 00:05:31.122475 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122573 kubelet[2878]: I0117 00:05:31.122491 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ccc2459b2c46195981bb2af963c1a2d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-f5e0a482e1\" (UID: \"5ccc2459b2c46195981bb2af963c1a2d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122573 kubelet[2878]: I0117 00:05:31.122505 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1476f040af750779544b6696b24ce5d3-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f5e0a482e1\" (UID: \"1476f040af750779544b6696b24ce5d3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122573 kubelet[2878]: I0117 00:05:31.122518 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1476f040af750779544b6696b24ce5d3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f5e0a482e1\" (UID: \"1476f040af750779544b6696b24ce5d3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122573 kubelet[2878]: I0117 00:05:31.122541 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1476f040af750779544b6696b24ce5d3-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.3.6-n-f5e0a482e1\" (UID: \"1476f040af750779544b6696b24ce5d3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.122573 kubelet[2878]: I0117 00:05:31.122557 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.178322 kubelet[2878]: I0117 00:05:31.177926 2878 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.178322 kubelet[2878]: E0117 00:05:31.178226 2878 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.322412 kubelet[2878]: E0117 00:05:31.322371 2878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f5e0a482e1?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms" Jan 17 00:05:31.384514 containerd[1715]: time="2026-01-17T00:05:31.384255542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-f5e0a482e1,Uid:1476f040af750779544b6696b24ce5d3,Namespace:kube-system,Attempt:0,}" Jan 17 00:05:31.390243 containerd[1715]: time="2026-01-17T00:05:31.390209384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-f5e0a482e1,Uid:9c80ac9745f453803a11deda78f4aaf9,Namespace:kube-system,Attempt:0,}" Jan 17 00:05:31.393070 containerd[1715]: time="2026-01-17T00:05:31.392949305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-f5e0a482e1,Uid:5ccc2459b2c46195981bb2af963c1a2d,Namespace:kube-system,Attempt:0,}" Jan 17 00:05:31.580752 kubelet[2878]: I0117 00:05:31.580714 2878 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.581053 kubelet[2878]: E0117 00:05:31.581027 2878 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:31.598390 kubelet[2878]: E0117 00:05:31.598355 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:05:31.630420 kubelet[2878]: E0117 00:05:31.630376 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f5e0a482e1&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:05:31.907429 kubelet[2878]: E0117 00:05:31.907318 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:05:31.992046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073042753.mount: Deactivated successfully. Jan 17 00:05:32.024897 containerd[1715]: time="2026-01-17T00:05:32.024840553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:05:32.027126 containerd[1715]: time="2026-01-17T00:05:32.027091834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 00:05:32.029472 containerd[1715]: time="2026-01-17T00:05:32.029438274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:05:32.032133 containerd[1715]: time="2026-01-17T00:05:32.031427475Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:05:32.033618 containerd[1715]: time="2026-01-17T00:05:32.033580036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:05:32.036192 containerd[1715]: time="2026-01-17T00:05:32.036153717Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:05:32.038757 containerd[1715]: time="2026-01-17T00:05:32.038698517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:05:32.042311 containerd[1715]: time="2026-01-17T00:05:32.042279199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:05:32.043495 containerd[1715]: time="2026-01-17T00:05:32.043044159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 652.773655ms" Jan 17 00:05:32.044708 containerd[1715]: time="2026-01-17T00:05:32.044680599Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.347177ms" Jan 17 00:05:32.047778 containerd[1715]: time="2026-01-17T00:05:32.047745040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.742295ms" Jan 17 00:05:32.122976 kubelet[2878]: E0117 00:05:32.122936 2878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f5e0a482e1?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="1.6s" Jan 17 00:05:32.351205 kubelet[2878]: E0117 00:05:32.351160 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:05:32.382941 kubelet[2878]: I0117 00:05:32.382915 2878 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:32.383212 kubelet[2878]: E0117 00:05:32.383189 2878 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:32.709995 kubelet[2878]: E0117 00:05:32.709940 2878 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:05:33.508886 containerd[1715]: time="2026-01-17T00:05:33.508515161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:05:33.508886 containerd[1715]: time="2026-01-17T00:05:33.508586401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:05:33.508886 containerd[1715]: time="2026-01-17T00:05:33.508632881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:33.508886 containerd[1715]: time="2026-01-17T00:05:33.508733161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:33.510848 containerd[1715]: time="2026-01-17T00:05:33.510548082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:05:33.510848 containerd[1715]: time="2026-01-17T00:05:33.510587762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:05:33.510848 containerd[1715]: time="2026-01-17T00:05:33.510597642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:33.510848 containerd[1715]: time="2026-01-17T00:05:33.510662122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:33.521654 containerd[1715]: time="2026-01-17T00:05:33.520778325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:05:33.521654 containerd[1715]: time="2026-01-17T00:05:33.520823805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:05:33.521654 containerd[1715]: time="2026-01-17T00:05:33.520849925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:33.521654 containerd[1715]: time="2026-01-17T00:05:33.520940845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:33.541741 systemd[1]: Started cri-containerd-53b4840c557a31c9eb69aff0f38ee3bf99210898f2e20a02c57ba9e90915350f.scope - libcontainer container 53b4840c557a31c9eb69aff0f38ee3bf99210898f2e20a02c57ba9e90915350f. Jan 17 00:05:33.546318 systemd[1]: Started cri-containerd-8a4b13f01128f3f6568a42310e527bbed177ca542a689860f8ec4cb651d192ea.scope - libcontainer container 8a4b13f01128f3f6568a42310e527bbed177ca542a689860f8ec4cb651d192ea. Jan 17 00:05:33.568692 systemd[1]: Started cri-containerd-b423bcfb01f17cf7aa7431996fa90ad5b48b5169dd76a106cf35fc4367f0ea62.scope - libcontainer container b423bcfb01f17cf7aa7431996fa90ad5b48b5169dd76a106cf35fc4367f0ea62. Jan 17 00:05:33.601063 containerd[1715]: time="2026-01-17T00:05:33.601018232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-f5e0a482e1,Uid:9c80ac9745f453803a11deda78f4aaf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a4b13f01128f3f6568a42310e527bbed177ca542a689860f8ec4cb651d192ea\"" Jan 17 00:05:33.615336 containerd[1715]: time="2026-01-17T00:05:33.614649876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-f5e0a482e1,Uid:1476f040af750779544b6696b24ce5d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b423bcfb01f17cf7aa7431996fa90ad5b48b5169dd76a106cf35fc4367f0ea62\"" Jan 17 00:05:33.622078 containerd[1715]: time="2026-01-17T00:05:33.622048519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-f5e0a482e1,Uid:5ccc2459b2c46195981bb2af963c1a2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53b4840c557a31c9eb69aff0f38ee3bf99210898f2e20a02c57ba9e90915350f\"" Jan 17 00:05:33.622421 containerd[1715]: time="2026-01-17T00:05:33.622377999Z" level=info msg="CreateContainer within sandbox \"8a4b13f01128f3f6568a42310e527bbed177ca542a689860f8ec4cb651d192ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:05:33.625980 containerd[1715]: time="2026-01-17T00:05:33.625948160Z" level=info msg="CreateContainer within sandbox \"b423bcfb01f17cf7aa7431996fa90ad5b48b5169dd76a106cf35fc4367f0ea62\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:05:33.629356 containerd[1715]: time="2026-01-17T00:05:33.629025721Z" level=info msg="CreateContainer within sandbox \"53b4840c557a31c9eb69aff0f38ee3bf99210898f2e20a02c57ba9e90915350f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:05:33.680343 containerd[1715]: time="2026-01-17T00:05:33.680299818Z" level=info msg="CreateContainer within sandbox \"b423bcfb01f17cf7aa7431996fa90ad5b48b5169dd76a106cf35fc4367f0ea62\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a356cdc456ba13c6ef62f29a3b6a851c7a537efd7233b3cbd5348b914e25a497\"" Jan 17 00:05:33.684569 containerd[1715]: 
time="2026-01-17T00:05:33.684540419Z" level=info msg="StartContainer for \"a356cdc456ba13c6ef62f29a3b6a851c7a537efd7233b3cbd5348b914e25a497\"" Jan 17 00:05:33.685150 containerd[1715]: time="2026-01-17T00:05:33.685125899Z" level=info msg="CreateContainer within sandbox \"8a4b13f01128f3f6568a42310e527bbed177ca542a689860f8ec4cb651d192ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3cfbb2ee12a0b3ee7db6a58db07a089f4218aba8ed7ea89be45e86f462d75b11\"" Jan 17 00:05:33.686221 containerd[1715]: time="2026-01-17T00:05:33.686147780Z" level=info msg="CreateContainer within sandbox \"53b4840c557a31c9eb69aff0f38ee3bf99210898f2e20a02c57ba9e90915350f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"065b4e056c64f7a0adf2a7d1acd9570b5e42b7d830707c64d6930a67c5e3bd8b\"" Jan 17 00:05:33.687923 containerd[1715]: time="2026-01-17T00:05:33.686981900Z" level=info msg="StartContainer for \"065b4e056c64f7a0adf2a7d1acd9570b5e42b7d830707c64d6930a67c5e3bd8b\"" Jan 17 00:05:33.688045 containerd[1715]: time="2026-01-17T00:05:33.688026340Z" level=info msg="StartContainer for \"3cfbb2ee12a0b3ee7db6a58db07a089f4218aba8ed7ea89be45e86f462d75b11\"" Jan 17 00:05:33.718711 systemd[1]: Started cri-containerd-a356cdc456ba13c6ef62f29a3b6a851c7a537efd7233b3cbd5348b914e25a497.scope - libcontainer container a356cdc456ba13c6ef62f29a3b6a851c7a537efd7233b3cbd5348b914e25a497. Jan 17 00:05:33.724094 kubelet[2878]: E0117 00:05:33.723506 2878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f5e0a482e1?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="3.2s" Jan 17 00:05:33.728703 systemd[1]: Started cri-containerd-065b4e056c64f7a0adf2a7d1acd9570b5e42b7d830707c64d6930a67c5e3bd8b.scope - libcontainer container 065b4e056c64f7a0adf2a7d1acd9570b5e42b7d830707c64d6930a67c5e3bd8b. Jan 17 00:05:33.730430 systemd[1]: Started cri-containerd-3cfbb2ee12a0b3ee7db6a58db07a089f4218aba8ed7ea89be45e86f462d75b11.scope - libcontainer container 3cfbb2ee12a0b3ee7db6a58db07a089f4218aba8ed7ea89be45e86f462d75b11. 
Jan 17 00:05:33.778682 containerd[1715]: time="2026-01-17T00:05:33.778575610Z" level=info msg="StartContainer for \"a356cdc456ba13c6ef62f29a3b6a851c7a537efd7233b3cbd5348b914e25a497\" returns successfully" Jan 17 00:05:33.788292 containerd[1715]: time="2026-01-17T00:05:33.788108733Z" level=info msg="StartContainer for \"3cfbb2ee12a0b3ee7db6a58db07a089f4218aba8ed7ea89be45e86f462d75b11\" returns successfully" Jan 17 00:05:33.788292 containerd[1715]: time="2026-01-17T00:05:33.788164533Z" level=info msg="StartContainer for \"065b4e056c64f7a0adf2a7d1acd9570b5e42b7d830707c64d6930a67c5e3bd8b\" returns successfully" Jan 17 00:05:33.794263 kubelet[2878]: E0117 00:05:33.794061 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:33.822241 kubelet[2878]: E0117 00:05:33.822196 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f5e0a482e1&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:05:33.837264 kubelet[2878]: E0117 00:05:33.837191 2878 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:05:33.985593 kubelet[2878]: I0117 00:05:33.985303 2878 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:34.796857 kubelet[2878]: E0117 00:05:34.796815 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:34.801399 kubelet[2878]: E0117 00:05:34.801271 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:34.805398 kubelet[2878]: E0117 00:05:34.805241 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:35.805604 kubelet[2878]: E0117 00:05:35.804969 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:35.805604 kubelet[2878]: E0117 00:05:35.805320 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:35.806197 kubelet[2878]: E0117 00:05:35.806091 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:36.808541 kubelet[2878]: E0117 00:05:36.807123 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 
00:05:36.984180 kubelet[2878]: E0117 00:05:36.984065 2878 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:37.184115 kubelet[2878]: I0117 00:05:37.183426 2878 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:37.184115 kubelet[2878]: E0117 00:05:37.183463 2878 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-f5e0a482e1\": node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:37.229614 kubelet[2878]: E0117 00:05:37.229569 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:37.330283 kubelet[2878]: E0117 00:05:37.330181 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:37.430980 kubelet[2878]: E0117 00:05:37.430766 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:37.504362 kubelet[2878]: E0117 00:05:37.504199 2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:37.530990 kubelet[2878]: E0117 00:05:37.530961 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:37.631675 kubelet[2878]: E0117 00:05:37.631638 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:37.732331 kubelet[2878]: E0117 00:05:37.732291 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:37.832594 kubelet[2878]: E0117 00:05:37.832463 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:37.933491 kubelet[2878]: E0117 00:05:37.933451 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:38.034490 kubelet[2878]: E0117 00:05:38.034454 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:38.135059 kubelet[2878]: E0117 00:05:38.134955 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:38.235785 kubelet[2878]: E0117 00:05:38.235750 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:38.336367 kubelet[2878]: E0117 00:05:38.336334 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:38.437242 kubelet[2878]: E0117 00:05:38.436893 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:38.537059 kubelet[2878]: E0117 00:05:38.537017 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" Jan 17 00:05:38.555416 kubelet[2878]: E0117 00:05:38.555213 
2878 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f5e0a482e1\" not found" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:38.621543 kubelet[2878]: I0117 00:05:38.620381 2878 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:38.678194 kubelet[2878]: I0117 00:05:38.678074 2878 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:05:38.678539 kubelet[2878]: I0117 00:05:38.678392 2878 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:38.709317 kubelet[2878]: I0117 00:05:38.709073 2878 apiserver.go:52] "Watching apiserver" Jan 17 00:05:38.722004 kubelet[2878]: I0117 00:05:38.721899 2878 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:05:38.726788 kubelet[2878]: I0117 00:05:38.726724 2878 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:05:38.726868 kubelet[2878]: I0117 00:05:38.726818 2878 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:38.837114 kubelet[2878]: I0117 00:05:38.837081 2878 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:05:39.910016 systemd[1]: Reloading requested from client PID 3168 ('systemctl') (unit session-9.scope)... Jan 17 00:05:39.910033 systemd[1]: Reloading... Jan 17 00:05:40.005567 zram_generator::config[3208]: No configuration found. Jan 17 00:05:40.107969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:05:40.197854 systemd[1]: Reloading finished in 287 ms. Jan 17 00:05:40.238034 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:40.251491 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:05:40.251712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:05:40.251763 systemd[1]: kubelet.service: Consumed 1.174s CPU time, 129.7M memory peak, 0B memory swap peak. Jan 17 00:05:40.257774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:42.911669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:05:42.922890 (kubelet)[3272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:05:42.966713 kubelet[3272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:05:42.966713 kubelet[3272]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
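
The flag deprecation warnings around this kubelet restart ask for --container-runtime-endpoint (and, just below, --volume-plugin-dir) to move into the kubelet's config file. A hedged sketch of the equivalent KubeletConfiguration, generated from the published Go types; the two values are the conventional containerd socket and the plugin directory this log later probes, assumed rather than read from the unit file. Note --pod-infra-container-image has no config-file equivalent and, per the warning, disappears in 1.35 once image GC reads the sandbox image from CRI.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            // Replaces --container-runtime-endpoint (a config-file field since v1.27).
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            // Replaces --volume-plugin-dir; matches the FlexVolume path probed later in this log.
            VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        }
        out, err := yaml.Marshal(&cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // feed the result to the kubelet's --config file
    }
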
Jan 17 00:05:42.966713 kubelet[3272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:05:42.967594 kubelet[3272]: I0117 00:05:42.967130 3272 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:05:42.974709 kubelet[3272]: I0117 00:05:42.974681 3272 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:05:42.974709 kubelet[3272]: I0117 00:05:42.974703 3272 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:05:42.974917 kubelet[3272]: I0117 00:05:42.974901 3272 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:05:42.978610 kubelet[3272]: I0117 00:05:42.978033 3272 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:05:42.981167 kubelet[3272]: I0117 00:05:42.980909 3272 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:05:42.987488 kubelet[3272]: E0117 00:05:42.987410 3272 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:05:42.987663 kubelet[3272]: I0117 00:05:42.987648 3272 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:05:42.991764 kubelet[3272]: I0117 00:05:42.991738 3272 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:05:42.992026 kubelet[3272]: I0117 00:05:42.991979 3272 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:05:42.992178 kubelet[3272]: I0117 00:05:42.992025 3272 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-f5e0a482e1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:05:42.992312 kubelet[3272]: I0117 00:05:42.992185 3272 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:05:42.992312 kubelet[3272]: I0117 00:05:42.992194 3272 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:05:42.992312 kubelet[3272]: I0117 00:05:42.992237 3272 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:05:42.992490 kubelet[3272]: I0117 00:05:42.992360 3272 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:05:42.992490 kubelet[3272]: I0117 00:05:42.992372 3272 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:05:42.992490 kubelet[3272]: I0117 00:05:42.992392 3272 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:05:42.992490 kubelet[3272]: I0117 00:05:42.992440 3272 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:05:43.003454 kubelet[3272]: I0117 00:05:43.002307 3272 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:05:43.005741 kubelet[3272]: I0117 00:05:43.004227 3272 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:05:43.010142 kubelet[3272]: I0117 00:05:43.009106 3272 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:05:43.010142 kubelet[3272]: I0117 00:05:43.009139 3272 server.go:1289] "Started kubelet" Jan 17 00:05:43.016939 kubelet[3272]: I0117 00:05:43.016831 3272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:05:43.031388 kubelet[3272]: I0117 
00:05:43.031230 3272 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:05:43.031388 kubelet[3272]: I0117 00:05:43.031325 3272 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:05:43.034092 kubelet[3272]: I0117 00:05:43.033730 3272 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:05:43.036607 kubelet[3272]: I0117 00:05:43.035300 3272 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:05:43.037626 kubelet[3272]: I0117 00:05:43.037611 3272 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:05:43.037803 kubelet[3272]: I0117 00:05:43.037793 3272 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:05:43.038065 kubelet[3272]: I0117 00:05:43.038013 3272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:05:43.038225 kubelet[3272]: I0117 00:05:43.038207 3272 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:05:43.040043 kubelet[3272]: I0117 00:05:43.040014 3272 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:05:43.041968 kubelet[3272]: I0117 00:05:43.041951 3272 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:05:43.042085 kubelet[3272]: I0117 00:05:43.042076 3272 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:05:43.042159 kubelet[3272]: I0117 00:05:43.042150 3272 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:05:43.042207 kubelet[3272]: I0117 00:05:43.042200 3272 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:05:43.042333 kubelet[3272]: E0117 00:05:43.042312 3272 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:05:43.047653 kubelet[3272]: I0117 00:05:43.047628 3272 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:05:43.047802 kubelet[3272]: I0117 00:05:43.047779 3272 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:05:43.049180 kubelet[3272]: E0117 00:05:43.049153 3272 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:05:43.050407 kubelet[3272]: I0117 00:05:43.050386 3272 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:05:43.094007 kubelet[3272]: I0117 00:05:43.093984 3272 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:05:43.094503 kubelet[3272]: I0117 00:05:43.094189 3272 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:05:43.094503 kubelet[3272]: I0117 00:05:43.094211 3272 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:05:43.094503 kubelet[3272]: I0117 00:05:43.094330 3272 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:05:43.094503 kubelet[3272]: I0117 00:05:43.094339 3272 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:05:43.094503 kubelet[3272]: I0117 00:05:43.094355 3272 policy_none.go:49] "None policy: Start" Jan 17 00:05:43.094503 kubelet[3272]: I0117 00:05:43.094364 3272 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:05:43.094503 kubelet[3272]: I0117 00:05:43.094372 3272 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:05:43.094503 kubelet[3272]: I0117 00:05:43.094447 3272 state_mem.go:75] "Updated machine memory state" Jan 17 00:05:43.099290 kubelet[3272]: E0117 00:05:43.098228 3272 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:05:43.099290 kubelet[3272]: I0117 00:05:43.098363 3272 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:05:43.099290 kubelet[3272]: I0117 00:05:43.098375 3272 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:05:43.099290 kubelet[3272]: I0117 00:05:43.098848 3272 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:05:43.100250 kubelet[3272]: E0117 00:05:43.100232 3272 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:05:43.143628 kubelet[3272]: I0117 00:05:43.143597 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.144141 kubelet[3272]: I0117 00:05:43.143895 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.144382 kubelet[3272]: I0117 00:05:43.144001 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.155290 kubelet[3272]: I0117 00:05:43.155221 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:05:43.155695 kubelet[3272]: E0117 00:05:43.155515 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-f5e0a482e1\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.155695 kubelet[3272]: I0117 00:05:43.155659 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:05:43.155695 kubelet[3272]: E0117 00:05:43.155695 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-f5e0a482e1\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.156204 kubelet[3272]: I0117 00:05:43.156181 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:05:43.156356 kubelet[3272]: E0117 00:05:43.156297 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.205106 kubelet[3272]: I0117 00:05:43.205082 3272 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.219348 kubelet[3272]: I0117 00:05:43.219313 3272 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.219488 kubelet[3272]: I0117 00:05:43.219401 3272 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.239660 kubelet[3272]: I0117 00:05:43.239629 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.239864 kubelet[3272]: I0117 00:05:43.239732 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ccc2459b2c46195981bb2af963c1a2d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-f5e0a482e1\" (UID: \"5ccc2459b2c46195981bb2af963c1a2d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.239864 kubelet[3272]: I0117 00:05:43.239751 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1476f040af750779544b6696b24ce5d3-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f5e0a482e1\" (UID: \"1476f040af750779544b6696b24ce5d3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.239864 kubelet[3272]: I0117 00:05:43.239766 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1476f040af750779544b6696b24ce5d3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f5e0a482e1\" (UID: \"1476f040af750779544b6696b24ce5d3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.240103 kubelet[3272]: I0117 00:05:43.239971 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1476f040af750779544b6696b24ce5d3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-f5e0a482e1\" (UID: \"1476f040af750779544b6696b24ce5d3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.240103 kubelet[3272]: I0117 00:05:43.240000 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.240103 kubelet[3272]: I0117 00:05:43.240028 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.240103 kubelet[3272]: I0117 00:05:43.240070 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:43.240233 kubelet[3272]: I0117 00:05:43.240092 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c80ac9745f453803a11deda78f4aaf9-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f5e0a482e1\" (UID: \"9c80ac9745f453803a11deda78f4aaf9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:44.002602 kubelet[3272]: I0117 00:05:44.002299 3272 apiserver.go:52] "Watching apiserver" Jan 17 00:05:46.979762 kubelet[3272]: I0117 00:05:44.038187 3272 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:05:46.979762 kubelet[3272]: I0117 00:05:44.075104 3272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:46.979762 kubelet[3272]: I0117 00:05:44.083053 3272 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:05:46.979762 kubelet[3272]: E0117 
00:05:44.083116 3272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-f5e0a482e1\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" Jan 17 00:05:46.979762 kubelet[3272]: I0117 00:05:44.095334 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f5e0a482e1" podStartSLOduration=6.095319327 podStartE2EDuration="6.095319327s" podCreationTimestamp="2026-01-17 00:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:05:44.095151287 +0000 UTC m=+1.168097169" watchObservedRunningTime="2026-01-17 00:05:44.095319327 +0000 UTC m=+1.168265209" Jan 17 00:05:46.979762 kubelet[3272]: I0117 00:05:44.120167 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f5e0a482e1" podStartSLOduration=6.120150865 podStartE2EDuration="6.120150865s" podCreationTimestamp="2026-01-17 00:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:05:44.106630215 +0000 UTC m=+1.179576097" watchObservedRunningTime="2026-01-17 00:05:44.120150865 +0000 UTC m=+1.193096747" Jan 17 00:05:46.981809 containerd[1715]: time="2026-01-17T00:05:46.310346724Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:05:46.982047 kubelet[3272]: I0117 00:05:44.131006 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f5e0a482e1" podStartSLOduration=6.130991592 podStartE2EDuration="6.130991592s" podCreationTimestamp="2026-01-17 00:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:05:44.120826825 +0000 UTC m=+1.193772707" watchObservedRunningTime="2026-01-17 00:05:44.130991592 +0000 UTC m=+1.203937474" Jan 17 00:05:46.982047 kubelet[3272]: I0117 00:05:46.310097 3272 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:05:46.982047 kubelet[3272]: I0117 00:05:46.310495 3272 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:05:46.998218 systemd[1]: Created slice kubepods-besteffort-podfb55db0c_0353_4ca4_9ef6_3eb2094e37f9.slice - libcontainer container kubepods-besteffort-podfb55db0c_0353_4ca4_9ef6_3eb2094e37f9.slice. 
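
"Updating runtime config through cri with podcidr" above is the kubelet pushing the node's PodCIDR (192.168.0.0/24) down to containerd over the CRI RuntimeService; until a CNI config appears, containerd keeps logging "No cni config template is specified, wait for other system components to drop the config." A minimal sketch of that RPC, assuming the standard cri-api client and the conventional containerd socket path:

    package main

    import (
        "context"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // The same update the kubelet logs as "Updating runtime config through cri with podcidr".
        if _, err := rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        }); err != nil {
            panic(err)
        }
    }
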
Jan 17 00:05:47.060682 kubelet[3272]: I0117 00:05:47.060517 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb55db0c-0353-4ca4-9ef6-3eb2094e37f9-kube-proxy\") pod \"kube-proxy-rw98h\" (UID: \"fb55db0c-0353-4ca4-9ef6-3eb2094e37f9\") " pod="kube-system/kube-proxy-rw98h" Jan 17 00:05:47.060682 kubelet[3272]: I0117 00:05:47.060559 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb55db0c-0353-4ca4-9ef6-3eb2094e37f9-xtables-lock\") pod \"kube-proxy-rw98h\" (UID: \"fb55db0c-0353-4ca4-9ef6-3eb2094e37f9\") " pod="kube-system/kube-proxy-rw98h" Jan 17 00:05:47.060682 kubelet[3272]: I0117 00:05:47.060577 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb55db0c-0353-4ca4-9ef6-3eb2094e37f9-lib-modules\") pod \"kube-proxy-rw98h\" (UID: \"fb55db0c-0353-4ca4-9ef6-3eb2094e37f9\") " pod="kube-system/kube-proxy-rw98h" Jan 17 00:05:47.060682 kubelet[3272]: I0117 00:05:47.060593 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qrv9\" (UniqueName: \"kubernetes.io/projected/fb55db0c-0353-4ca4-9ef6-3eb2094e37f9-kube-api-access-2qrv9\") pod \"kube-proxy-rw98h\" (UID: \"fb55db0c-0353-4ca4-9ef6-3eb2094e37f9\") " pod="kube-system/kube-proxy-rw98h" Jan 17 00:05:47.306054 containerd[1715]: time="2026-01-17T00:05:47.305420543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rw98h,Uid:fb55db0c-0353-4ca4-9ef6-3eb2094e37f9,Namespace:kube-system,Attempt:0,}" Jan 17 00:05:47.349225 containerd[1715]: time="2026-01-17T00:05:47.349068854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:05:47.350890 containerd[1715]: time="2026-01-17T00:05:47.349615774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:05:47.350890 containerd[1715]: time="2026-01-17T00:05:47.350604135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:47.351191 containerd[1715]: time="2026-01-17T00:05:47.350822535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:47.377844 systemd[1]: Started cri-containerd-5d84daef5eacdf8d1cdd90006cd55a62697c552eba300bfb06cf799f29222245.scope - libcontainer container 5d84daef5eacdf8d1cdd90006cd55a62697c552eba300bfb06cf799f29222245. 
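
The VerifyControllerAttachedVolume entries above list the four volumes of the kube-proxy pod: a ConfigMap, two host paths, and the projected kube-api-access-2qrv9 service-account volume. A sketch of how those volume sources look in the pod spec the reconciler is mounting, using the core/v1 Go types; the host paths and token-projection details are the standard kube-proxy/kube-api-access shapes, assumed here rather than dumped from the API:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func kubeProxyVolumes() []corev1.Volume {
        expiration := int64(3607) // conventional kube-api-access token lifetime; assumed
        return []corev1.Volume{
            { // kubernetes.io/configmap volume "kube-proxy"
                Name: "kube-proxy",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
                    },
                },
            },
            { // kubernetes.io/host-path volume "xtables-lock"
                Name: "xtables-lock",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"},
                },
            },
            { // kubernetes.io/host-path volume "lib-modules"
                Name: "lib-modules",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
                },
            },
            { // kubernetes.io/projected volume "kube-api-access-2qrv9":
                // a bound service-account token plus the cluster CA and namespace
                Name: "kube-api-access-2qrv9",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{
                            {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                                Path:              "token",
                                ExpirationSeconds: &expiration,
                            }},
                            {ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            }},
                            {DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "namespace",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                                }},
                            }},
                        },
                    },
                },
            },
        }
    }

    func main() { _ = kubeProxyVolumes() }
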
Jan 17 00:05:47.408717 containerd[1715]: time="2026-01-17T00:05:47.408676976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rw98h,Uid:fb55db0c-0353-4ca4-9ef6-3eb2094e37f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d84daef5eacdf8d1cdd90006cd55a62697c552eba300bfb06cf799f29222245\"" Jan 17 00:05:47.417983 containerd[1715]: time="2026-01-17T00:05:47.417943302Z" level=info msg="CreateContainer within sandbox \"5d84daef5eacdf8d1cdd90006cd55a62697c552eba300bfb06cf799f29222245\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:05:47.719145 containerd[1715]: time="2026-01-17T00:05:47.719089354Z" level=info msg="CreateContainer within sandbox \"5d84daef5eacdf8d1cdd90006cd55a62697c552eba300bfb06cf799f29222245\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"baff46e450cc502268d482b69250b1e8c113578a4bb2dae3058e0f194d15b403\"" Jan 17 00:05:47.719780 containerd[1715]: time="2026-01-17T00:05:47.719745234Z" level=info msg="StartContainer for \"baff46e450cc502268d482b69250b1e8c113578a4bb2dae3058e0f194d15b403\"" Jan 17 00:05:47.742681 systemd[1]: Started cri-containerd-baff46e450cc502268d482b69250b1e8c113578a4bb2dae3058e0f194d15b403.scope - libcontainer container baff46e450cc502268d482b69250b1e8c113578a4bb2dae3058e0f194d15b403. Jan 17 00:05:47.771921 containerd[1715]: time="2026-01-17T00:05:47.771784831Z" level=info msg="StartContainer for \"baff46e450cc502268d482b69250b1e8c113578a4bb2dae3058e0f194d15b403\" returns successfully" Jan 17 00:05:48.179432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986960986.mount: Deactivated successfully. Jan 17 00:05:48.398267 kubelet[3272]: I0117 00:05:48.398105 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rw98h" podStartSLOduration=2.398075591 podStartE2EDuration="2.398075591s" podCreationTimestamp="2026-01-17 00:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:05:48.100712702 +0000 UTC m=+5.173658584" watchObservedRunningTime="2026-01-17 00:05:48.398075591 +0000 UTC m=+5.471021473" Jan 17 00:05:48.417995 systemd[1]: Created slice kubepods-besteffort-pod45a79632_c248_4ffd_9f9d_2f33ecca5416.slice - libcontainer container kubepods-besteffort-pod45a79632_c248_4ffd_9f9d_2f33ecca5416.slice. 
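
The kube-proxy entries above trace the full CRI lifecycle: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer "returns successfully" once systemd has started the matching cri-containerd-<id>.scope. A condensed sketch of the same three RPCs against containerd, assuming the cri-api client; the sandbox metadata echoes the log, while the kube-proxy image tag is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-rw98h",
                Namespace: "kube-system",
                Uid:       "fb55db0c-0353-4ca4-9ef6-3eb2094e37f9",
                Attempt:   0,
            },
        }

        // 1. RunPodSandbox: creates the pause container; the "returns sandbox id" entry.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer within that sandbox (image tag is illustrative, not from this log).
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
            },
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer: the "StartContainer ... returns successfully" entry.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
            panic(err)
        }
        fmt.Println("started", cc.ContainerId)
    }
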
Jan 17 00:05:48.469059 kubelet[3272]: I0117 00:05:48.469013 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9lz\" (UniqueName: \"kubernetes.io/projected/45a79632-c248-4ffd-9f9d-2f33ecca5416-kube-api-access-sq9lz\") pod \"tigera-operator-7dcd859c48-l8842\" (UID: \"45a79632-c248-4ffd-9f9d-2f33ecca5416\") " pod="tigera-operator/tigera-operator-7dcd859c48-l8842" Jan 17 00:05:48.469310 kubelet[3272]: I0117 00:05:48.469244 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/45a79632-c248-4ffd-9f9d-2f33ecca5416-var-lib-calico\") pod \"tigera-operator-7dcd859c48-l8842\" (UID: \"45a79632-c248-4ffd-9f9d-2f33ecca5416\") " pod="tigera-operator/tigera-operator-7dcd859c48-l8842" Jan 17 00:05:48.722789 containerd[1715]: time="2026-01-17T00:05:48.722667539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l8842,Uid:45a79632-c248-4ffd-9f9d-2f33ecca5416,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:05:48.764379 containerd[1715]: time="2026-01-17T00:05:48.764250248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:05:48.764515 containerd[1715]: time="2026-01-17T00:05:48.764400448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:05:48.764515 containerd[1715]: time="2026-01-17T00:05:48.764436488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:48.764711 containerd[1715]: time="2026-01-17T00:05:48.764600048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:05:48.786776 systemd[1]: Started cri-containerd-f3aa89df8c8a80c09e40bd3310f1204c435aebd208d9b002e6a6dea04e127e88.scope - libcontainer container f3aa89df8c8a80c09e40bd3310f1204c435aebd208d9b002e6a6dea04e127e88. Jan 17 00:05:48.813504 containerd[1715]: time="2026-01-17T00:05:48.813413923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l8842,Uid:45a79632-c248-4ffd-9f9d-2f33ecca5416,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f3aa89df8c8a80c09e40bd3310f1204c435aebd208d9b002e6a6dea04e127e88\"" Jan 17 00:05:48.815305 containerd[1715]: time="2026-01-17T00:05:48.815083244Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:05:51.463599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796921843.mount: Deactivated successfully. 
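
The PullImage "quay.io/tigera/operator:v1.38.7" request above goes through the CRI ImageService rather than the RuntimeService; the pull completes a few entries below with a digest-pinned image id. A minimal sketch of the call, assuming the cri-api client and the conventional containerd socket:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
        })
        if err != nil {
            panic(err)
        }
        // The returned reference is the digest-pinned id echoed in the
        // "Pulled image ... with image id ..." entry that follows.
        fmt.Println(resp.ImageRef)
    }
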
Jan 17 00:05:51.855617 containerd[1715]: time="2026-01-17T00:05:51.854837797Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:51.857068 containerd[1715]: time="2026-01-17T00:05:51.857041877Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 17 00:05:51.859375 containerd[1715]: time="2026-01-17T00:05:51.859329318Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:51.862824 containerd[1715]: time="2026-01-17T00:05:51.862768399Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:05:51.863661 containerd[1715]: time="2026-01-17T00:05:51.863553560Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.048429676s" Jan 17 00:05:51.863661 containerd[1715]: time="2026-01-17T00:05:51.863583520Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 17 00:05:51.870915 containerd[1715]: time="2026-01-17T00:05:51.870884522Z" level=info msg="CreateContainer within sandbox \"f3aa89df8c8a80c09e40bd3310f1204c435aebd208d9b002e6a6dea04e127e88\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:05:51.978922 containerd[1715]: time="2026-01-17T00:05:51.978878000Z" level=info msg="CreateContainer within sandbox \"f3aa89df8c8a80c09e40bd3310f1204c435aebd208d9b002e6a6dea04e127e88\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ff686be4a365e2439cd82761af6084b0ed9d3c6535998fab6e3f690ea5c083ff\"" Jan 17 00:05:51.981349 containerd[1715]: time="2026-01-17T00:05:51.979395320Z" level=info msg="StartContainer for \"ff686be4a365e2439cd82761af6084b0ed9d3c6535998fab6e3f690ea5c083ff\"" Jan 17 00:05:52.009690 systemd[1]: Started cri-containerd-ff686be4a365e2439cd82761af6084b0ed9d3c6535998fab6e3f690ea5c083ff.scope - libcontainer container ff686be4a365e2439cd82761af6084b0ed9d3c6535998fab6e3f690ea5c083ff. 
Jan 17 00:05:52.034851 containerd[1715]: time="2026-01-17T00:05:52.034813499Z" level=info msg="StartContainer for \"ff686be4a365e2439cd82761af6084b0ed9d3c6535998fab6e3f690ea5c083ff\" returns successfully" Jan 17 00:05:52.655036 kubelet[3272]: I0117 00:05:52.654435 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-l8842" podStartSLOduration=1.6046684789999999 podStartE2EDuration="4.654419115s" podCreationTimestamp="2026-01-17 00:05:48 +0000 UTC" firstStartedPulling="2026-01-17 00:05:48.814710204 +0000 UTC m=+5.887656086" lastFinishedPulling="2026-01-17 00:05:51.86446084 +0000 UTC m=+8.937406722" observedRunningTime="2026-01-17 00:05:52.105077404 +0000 UTC m=+9.178023286" watchObservedRunningTime="2026-01-17 00:05:52.654419115 +0000 UTC m=+9.727364957" Jan 17 00:05:57.805048 sudo[2222]: pam_unix(sudo:session): session closed for user root Jan 17 00:05:57.883902 sshd[2219]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:57.890523 systemd-logind[1696]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:05:57.891136 systemd[1]: sshd@6-10.200.20.17:22-10.200.16.10:56226.service: Deactivated successfully. Jan 17 00:05:57.894738 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:05:57.895112 systemd[1]: session-9.scope: Consumed 6.944s CPU time, 150.0M memory peak, 0B memory swap peak. Jan 17 00:05:57.897149 systemd-logind[1696]: Removed session 9. Jan 17 00:06:08.255227 systemd[1]: Created slice kubepods-besteffort-pod3861508e_2510_478e_a625_e37f51b6f6ed.slice - libcontainer container kubepods-besteffort-pod3861508e_2510_478e_a625_e37f51b6f6ed.slice. Jan 17 00:06:08.287028 kubelet[3272]: I0117 00:06:08.286973 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3861508e-2510-478e-a625-e37f51b6f6ed-typha-certs\") pod \"calico-typha-fd96d5d65-8n99j\" (UID: \"3861508e-2510-478e-a625-e37f51b6f6ed\") " pod="calico-system/calico-typha-fd96d5d65-8n99j" Jan 17 00:06:08.287028 kubelet[3272]: I0117 00:06:08.287013 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfbd\" (UniqueName: \"kubernetes.io/projected/3861508e-2510-478e-a625-e37f51b6f6ed-kube-api-access-9gfbd\") pod \"calico-typha-fd96d5d65-8n99j\" (UID: \"3861508e-2510-478e-a625-e37f51b6f6ed\") " pod="calico-system/calico-typha-fd96d5d65-8n99j" Jan 17 00:06:08.287028 kubelet[3272]: I0117 00:06:08.287034 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3861508e-2510-478e-a625-e37f51b6f6ed-tigera-ca-bundle\") pod \"calico-typha-fd96d5d65-8n99j\" (UID: \"3861508e-2510-478e-a625-e37f51b6f6ed\") " pod="calico-system/calico-typha-fd96d5d65-8n99j" Jan 17 00:06:08.476808 systemd[1]: Created slice kubepods-besteffort-pod0ca9a2a0_5424_40c1_be34_595833901b76.slice - libcontainer container kubepods-besteffort-pod0ca9a2a0_5424_40c1_be34_595833901b76.slice. 
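
In the pod_startup_latency_tracker entry above, podStartSLOduration is podStartE2EDuration minus the time spent pulling images, where E2E runs from podCreationTimestamp to watchObservedRunningTime. The sketch below replays that arithmetic with the timestamps hard-coded from the tigera-operator line and reproduces both figures exactly:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // All four values copied from the log entry above.
        created := parse("2026-01-17 00:05:48 +0000 UTC")
        firstPull := parse("2026-01-17 00:05:48.814710204 +0000 UTC")
        lastPull := parse("2026-01-17 00:05:51.86446084 +0000 UTC")
        watchObserved := parse("2026-01-17 00:05:52.654419115 +0000 UTC")

        e2e := watchObserved.Sub(created)    // 4.654419115s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // subtract 3.049750636s of image pulling
        fmt.Println(e2e, slo)                // prints 4.654419115s 1.604668479s
    }
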
Jan 17 00:06:08.487425 kubelet[3272]: I0117 00:06:08.487384 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-policysync\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487425 kubelet[3272]: I0117 00:06:08.487426 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-cni-log-dir\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487668 kubelet[3272]: I0117 00:06:08.487443 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-cni-net-dir\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487668 kubelet[3272]: I0117 00:06:08.487459 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0ca9a2a0-5424-40c1-be34-595833901b76-node-certs\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487668 kubelet[3272]: I0117 00:06:08.487477 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr4t4\" (UniqueName: \"kubernetes.io/projected/0ca9a2a0-5424-40c1-be34-595833901b76-kube-api-access-hr4t4\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487970 kubelet[3272]: I0117 00:06:08.487754 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-flexvol-driver-host\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487970 kubelet[3272]: I0117 00:06:08.487811 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-xtables-lock\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487970 kubelet[3272]: I0117 00:06:08.487847 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-var-run-calico\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487970 kubelet[3272]: I0117 00:06:08.487863 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ca9a2a0-5424-40c1-be34-595833901b76-tigera-ca-bundle\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.487970 kubelet[3272]: I0117 00:06:08.487889 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-cni-bin-dir\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.488102 kubelet[3272]: I0117 00:06:08.487905 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-lib-modules\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.488102 kubelet[3272]: I0117 00:06:08.487921 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ca9a2a0-5424-40c1-be34-595833901b76-var-lib-calico\") pod \"calico-node-26k4t\" (UID: \"0ca9a2a0-5424-40c1-be34-595833901b76\") " pod="calico-system/calico-node-26k4t"
Jan 17 00:06:08.562227 containerd[1715]: time="2026-01-17T00:06:08.561810000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd96d5d65-8n99j,Uid:3861508e-2510-478e-a625-e37f51b6f6ed,Namespace:calico-system,Attempt:0,}"
Jan 17 00:06:08.592823 kubelet[3272]: E0117 00:06:08.591672 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:06:08.592823 kubelet[3272]: W0117 00:06:08.591696 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:06:08.592823 kubelet[3272]: E0117 00:06:08.591715 3272 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[kubelet FlexVolume probe failure triplet (driver-call.go:262, driver-call.go:149, plugins.go:703) repeated once more at 00:06:08.598]
Jan 17 00:06:08.598997 containerd[1715]: time="2026-01-17T00:06:08.598551902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:06:08.598997 containerd[1715]: time="2026-01-17T00:06:08.598602302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:06:08.598997 containerd[1715]: time="2026-01-17T00:06:08.598617382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:06:08.598997 containerd[1715]: time="2026-01-17T00:06:08.598742062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[FlexVolume probe failure triplet repeated 9 more times between 00:06:08.599 and 00:06:08.618]
Jan 17 00:06:08.630704 systemd[1]: Started cri-containerd-eb1cda9077e3fbfaa8b94b85a01206efeded8d2bf431741311f120c25cca0fe3.scope - libcontainer container eb1cda9077e3fbfaa8b94b85a01206efeded8d2bf431741311f120c25cca0fe3.
Jan 17 00:06:08.669550 kubelet[3272]: E0117 00:06:08.668973 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"
Jan 17 00:06:08.674925 kubelet[3272]: E0117 00:06:08.674890 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:06:08.674925 kubelet[3272]: W0117 00:06:08.674910 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:06:08.674925 kubelet[3272]: E0117 00:06:08.674927 3272 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[FlexVolume probe failure triplet repeated 19 more times between 00:06:08.675 and 00:06:08.681]
Jan 17 00:06:08.690952 kubelet[3272]: I0117 00:06:08.690371 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8214a0c3-a0f7-40b6-915d-08cea6de347e-socket-dir\") pod \"csi-node-driver-z7gm8\" (UID: \"8214a0c3-a0f7-40b6-915d-08cea6de347e\") " pod="calico-system/csi-node-driver-z7gm8"
Jan 17 00:06:08.690952 kubelet[3272]: I0117 00:06:08.690699 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8214a0c3-a0f7-40b6-915d-08cea6de347e-varrun\") pod \"csi-node-driver-z7gm8\" (UID: \"8214a0c3-a0f7-40b6-915d-08cea6de347e\") " pod="calico-system/csi-node-driver-z7gm8"
Jan 17 00:06:08.691232 kubelet[3272]: I0117 00:06:08.691000 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jqtl\" (UniqueName: \"kubernetes.io/projected/8214a0c3-a0f7-40b6-915d-08cea6de347e-kube-api-access-7jqtl\") pod \"csi-node-driver-z7gm8\" (UID: \"8214a0c3-a0f7-40b6-915d-08cea6de347e\") " pod="calico-system/csi-node-driver-z7gm8"
Jan 17 00:06:08.692251 kubelet[3272]: I0117 00:06:08.692188 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8214a0c3-a0f7-40b6-915d-08cea6de347e-kubelet-dir\") pod \"csi-node-driver-z7gm8\" (UID: \"8214a0c3-a0f7-40b6-915d-08cea6de347e\") " pod="calico-system/csi-node-driver-z7gm8"
Jan 17 00:06:08.692547 kubelet[3272]: I0117 00:06:08.692473 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8214a0c3-a0f7-40b6-915d-08cea6de347e-registration-dir\") pod \"csi-node-driver-z7gm8\" (UID: \"8214a0c3-a0f7-40b6-915d-08cea6de347e\") " pod="calico-system/csi-node-driver-z7gm8"
[FlexVolume probe failure triplet repeated 15 more times, interleaved with the VerifyControllerAttachedVolume entries above, between 00:06:08.690 and 00:06:08.693]
Jan 17 00:06:08.696583 containerd[1715]: time="2026-01-17T00:06:08.696547200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd96d5d65-8n99j,Uid:3861508e-2510-478e-a625-e37f51b6f6ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb1cda9077e3fbfaa8b94b85a01206efeded8d2bf431741311f120c25cca0fe3\""
Jan 17 00:06:08.700178 containerd[1715]: time="2026-01-17T00:06:08.700148562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 00:06:08.780664 containerd[1715]: time="2026-01-17T00:06:08.780619209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-26k4t,Uid:0ca9a2a0-5424-40c1-be34-595833901b76,Namespace:calico-system,Attempt:0,}"
Jan 17 00:06:08.792932 kubelet[3272]: E0117 00:06:08.792900 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:06:08.792932 kubelet[3272]: W0117 00:06:08.792921 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:06:08.793198 kubelet[3272]: E0117 00:06:08.792948 3272 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[FlexVolume probe failure triplet repeated 25 more times between 00:06:08.793 and 00:06:08.810]
Jan 17 00:06:08.816878 containerd[1715]: time="2026-01-17T00:06:08.816619150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:06:08.816878 containerd[1715]: time="2026-01-17T00:06:08.816731271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:06:08.817139 containerd[1715]: time="2026-01-17T00:06:08.816753751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:06:08.817139 containerd[1715]: time="2026-01-17T00:06:08.817082191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:06:08.840691 systemd[1]: Started cri-containerd-3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b.scope - libcontainer container 3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b.
Jan 17 00:06:08.860038 containerd[1715]: time="2026-01-17T00:06:08.859925376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-26k4t,Uid:0ca9a2a0-5424-40c1-be34-595833901b76,Namespace:calico-system,Attempt:0,} returns sandbox id \"3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b\""
Jan 17 00:06:10.145625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468242521.mount: Deactivated successfully.
Jan 17 00:06:11.042877 kubelet[3272]: E0117 00:06:11.042685 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"
Jan 17 00:06:11.523192 containerd[1715]: time="2026-01-17T00:06:11.523069829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:06:11.525460 containerd[1715]: time="2026-01-17T00:06:11.525283471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 17 00:06:11.530311 containerd[1715]: time="2026-01-17T00:06:11.530262754Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:06:11.533909 containerd[1715]: time="2026-01-17T00:06:11.533881436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:06:11.534669 containerd[1715]: time="2026-01-17T00:06:11.534552356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.834370434s"
Jan 17 00:06:11.534669 containerd[1715]: time="2026-01-17T00:06:11.534582596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 17 00:06:11.536641 containerd[1715]: time="2026-01-17T00:06:11.536108077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:06:11.554076 containerd[1715]: time="2026-01-17T00:06:11.554041328Z" level=info msg="CreateContainer within sandbox \"eb1cda9077e3fbfaa8b94b85a01206efeded8d2bf431741311f120c25cca0fe3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:06:11.585504 containerd[1715]: time="2026-01-17T00:06:11.585374786Z" level=info msg="CreateContainer within sandbox \"eb1cda9077e3fbfaa8b94b85a01206efeded8d2bf431741311f120c25cca0fe3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d8959080152c923ad2d69433b9fc9393313ac4ebc9a1d6247ca72e054c71c1ed\""
Jan 17 00:06:11.587374 containerd[1715]: time="2026-01-17T00:06:11.585975547Z" level=info msg="StartContainer for \"d8959080152c923ad2d69433b9fc9393313ac4ebc9a1d6247ca72e054c71c1ed\""
Jan 17 00:06:11.621084 systemd[1]: Started cri-containerd-d8959080152c923ad2d69433b9fc9393313ac4ebc9a1d6247ca72e054c71c1ed.scope - libcontainer container d8959080152c923ad2d69433b9fc9393313ac4ebc9a1d6247ca72e054c71c1ed.
Jan 17 00:06:11.654266 containerd[1715]: time="2026-01-17T00:06:11.654161347Z" level=info msg="StartContainer for \"d8959080152c923ad2d69433b9fc9393313ac4ebc9a1d6247ca72e054c71c1ed\" returns successfully" Jan 17 00:06:12.207195 kubelet[3272]: E0117 00:06:12.207166 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:06:12.207649 kubelet[3272]: W0117 00:06:12.207517 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:06:12.207649 kubelet[3272]: E0117 00:06:12.207561 3272 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:06:12.207926 kubelet[3272]: E0117 00:06:12.207740 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:06:12.207926 kubelet[3272]: W0117 00:06:12.207749 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:06:12.207926 kubelet[3272]: E0117 00:06:12.207806 3272 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:06:12.208241 kubelet[3272]: E0117 00:06:12.208131 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:06:12.208241 kubelet[3272]: W0117 00:06:12.208143 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:06:12.208241 kubelet[3272]: E0117 00:06:12.208157 3272 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:06:12.208468 kubelet[3272]: E0117 00:06:12.208397 3272 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:06:12.208468 kubelet[3272]: W0117 00:06:12.208407 3272 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:06:12.208468 kubelet[3272]: E0117 00:06:12.208417 3272 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:06:12.878571 containerd[1715]: time="2026-01-17T00:06:12.878239710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:12.880272 containerd[1715]: time="2026-01-17T00:06:12.880235991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 17 00:06:12.882522 containerd[1715]: time="2026-01-17T00:06:12.882480313Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:12.886210 containerd[1715]: time="2026-01-17T00:06:12.886180155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:12.887132 containerd[1715]: time="2026-01-17T00:06:12.886800715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.350661238s" Jan 17 00:06:12.887132 containerd[1715]: time="2026-01-17T00:06:12.886832155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 17 00:06:12.893455 containerd[1715]: time="2026-01-17T00:06:12.893346719Z" level=info msg="CreateContainer within sandbox \"3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:06:12.925759 containerd[1715]: time="2026-01-17T00:06:12.925704098Z" level=info msg="CreateContainer within sandbox \"3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa\"" Jan 17 00:06:12.926468 containerd[1715]: time="2026-01-17T00:06:12.926339218Z" level=info msg="StartContainer for \"06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa\"" Jan 17 00:06:12.958817 systemd[1]: Started cri-containerd-06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa.scope - libcontainer container 06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa. Jan 17 00:06:12.988474 containerd[1715]: time="2026-01-17T00:06:12.987900215Z" level=info msg="StartContainer for \"06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa\" returns successfully" Jan 17 00:06:12.997322 systemd[1]: cri-containerd-06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa.scope: Deactivated successfully. Jan 17 00:06:13.019297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa-rootfs.mount: Deactivated successfully. 
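The kubelet error bursts above are the FlexVolume plugin prober at work: for each directory under the volume-plugin path it executes the driver binary with the argument `init` and JSON-decodes whatever lands on stdout. The binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, the call yields empty output, and unmarshalling an empty string is exactly the logged "unexpected end of JSON input". The flexvol-driver container created just above (Calico's pod2daemon-flexvol image) is the component that installs that uds binary, which is why the probe errors stop once it has run. As a sketch of the contract the prober expects, here is a minimal hypothetical driver stub in Go; the driverStatus shape follows the documented FlexVolume call-out convention and is an illustration, not a copy of kubelet's own types:

```go
// Hypothetical minimal FlexVolume driver: prints a JSON status object,
// which is what kubelet's driver-call.go unmarshals. An empty stdout is
// what produces the "unexpected end of JSON input" errors in this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func emit(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		emit(driverStatus{Status: "Failure", Message: "no command"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		emit(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		emit(driverStatus{Status: "Not supported"})
	}
}
```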
Jan 17 00:06:13.044070 kubelet[3272]: E0117 00:06:13.044021 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:13.138605 kubelet[3272]: I0117 00:06:13.138474 3272 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:06:13.158044 kubelet[3272]: I0117 00:06:13.157677 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fd96d5d65-8n99j" podStartSLOduration=2.320796839 podStartE2EDuration="5.157660835s" podCreationTimestamp="2026-01-17 00:06:08 +0000 UTC" firstStartedPulling="2026-01-17 00:06:08.698574801 +0000 UTC m=+25.771520643" lastFinishedPulling="2026-01-17 00:06:11.535438757 +0000 UTC m=+28.608384639" observedRunningTime="2026-01-17 00:06:12.148701319 +0000 UTC m=+29.221647201" watchObservedRunningTime="2026-01-17 00:06:13.157660835 +0000 UTC m=+30.230606717" Jan 17 00:06:13.993597 containerd[1715]: time="2026-01-17T00:06:13.993372369Z" level=info msg="shim disconnected" id=06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa namespace=k8s.io Jan 17 00:06:13.993597 containerd[1715]: time="2026-01-17T00:06:13.993428089Z" level=warning msg="cleaning up after shim disconnected" id=06a9fcdd0aaa3c92482ef8cf503bc4dbd93f5f43e4fc5d6c570a00487625f1aa namespace=k8s.io Jan 17 00:06:13.993597 containerd[1715]: time="2026-01-17T00:06:13.993436569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:14.143722 containerd[1715]: time="2026-01-17T00:06:14.143672898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:06:15.043731 kubelet[3272]: E0117 00:06:15.042637 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:17.043805 kubelet[3272]: E0117 00:06:17.043392 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:19.045309 kubelet[3272]: E0117 00:06:19.044007 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:19.234564 containerd[1715]: time="2026-01-17T00:06:19.234432236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:19.237172 containerd[1715]: time="2026-01-17T00:06:19.236835717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 17 00:06:19.240011 containerd[1715]: time="2026-01-17T00:06:19.239977238Z" level=info msg="ImageCreate event 
name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:19.243832 containerd[1715]: time="2026-01-17T00:06:19.243791960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:19.244747 containerd[1715]: time="2026-01-17T00:06:19.244659681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 5.100941183s" Jan 17 00:06:19.244747 containerd[1715]: time="2026-01-17T00:06:19.244688201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 17 00:06:19.251063 containerd[1715]: time="2026-01-17T00:06:19.251027164Z" level=info msg="CreateContainer within sandbox \"3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:06:19.272404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603622615.mount: Deactivated successfully. Jan 17 00:06:19.280082 containerd[1715]: time="2026-01-17T00:06:19.280043058Z" level=info msg="CreateContainer within sandbox \"3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d\"" Jan 17 00:06:19.282036 containerd[1715]: time="2026-01-17T00:06:19.280777058Z" level=info msg="StartContainer for \"8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d\"" Jan 17 00:06:19.311769 systemd[1]: Started cri-containerd-8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d.scope - libcontainer container 8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d. 
Jan 17 00:06:19.338220 containerd[1715]: time="2026-01-17T00:06:19.338172045Z" level=info msg="StartContainer for \"8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d\" returns successfully" Jan 17 00:06:21.043074 kubelet[3272]: E0117 00:06:21.042669 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:23.043960 kubelet[3272]: E0117 00:06:23.042924 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:25.043735 kubelet[3272]: E0117 00:06:25.043373 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:28.417828 kubelet[3272]: E0117 00:06:27.042736 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:28.417828 kubelet[3272]: I0117 00:06:27.322676 3272 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:06:29.045329 kubelet[3272]: E0117 00:06:29.044948 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:29.639555 containerd[1715]: time="2026-01-17T00:06:29.639496546Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:06:29.643086 systemd[1]: cri-containerd-8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d.scope: Deactivated successfully. Jan 17 00:06:29.664264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d-rootfs.mount: Deactivated successfully. 
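The reload error at 00:06:29 is the key to everything that follows: containerd's CRI plugin watches /etc/cni/net.d and re-reads it on every filesystem event, but only files with a recognized network-config extension count as CNI configs. The write that triggered this particular reload was calico-kubeconfig, a credential file dropped by install-cni rather than a network config, so the loader still finds nothing and the runtime stays NetworkReady=false. A small diagnostic sketch along those lines; the paths come from the log, while the .conf/.conflist/.json filter is an assumption mirroring libcni's usual convention:

```go
// Lists /etc/cni/net.d the way a CNI config loader would: skip files
// without a config extension, then require parseable JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "-", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			// candidate network config
		default:
			fmt.Println("ignored (not a config):", e.Name()) // e.g. calico-kubeconfig
			continue
		}
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			fmt.Println(e.Name(), "read error:", err)
			continue
		}
		var cfg map[string]any
		if err := json.Unmarshal(data, &cfg); err != nil {
			fmt.Println(e.Name(), "invalid JSON:", err)
			continue
		}
		found++
		fmt.Println("network config:", e.Name(), "name:", cfg["name"])
	}
	if found == 0 {
		fmt.Println("no network config found in", dir) // the condition logged above
	}
}
```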
Jan 17 00:06:30.880756 kubelet[3272]: I0117 00:06:29.699558 3272 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:06:33.082913 containerd[1715]: time="2026-01-17T00:06:33.081436070Z" level=info msg="shim disconnected" id=8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d namespace=k8s.io Jan 17 00:06:33.082913 containerd[1715]: time="2026-01-17T00:06:33.082892590Z" level=warning msg="cleaning up after shim disconnected" id=8e618e34fe338480e0979650810729ae3130a383c6ea55d2dd414068e824cd6d namespace=k8s.io Jan 17 00:06:33.082913 containerd[1715]: time="2026-01-17T00:06:33.082906310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:06:33.086918 kubelet[3272]: I0117 00:06:33.086877 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndwm4\" (UniqueName: \"kubernetes.io/projected/acb3da73-aad2-4399-b6f1-7f3c1a0d99c5-kube-api-access-ndwm4\") pod \"coredns-674b8bbfcf-n869n\" (UID: \"acb3da73-aad2-4399-b6f1-7f3c1a0d99c5\") " pod="kube-system/coredns-674b8bbfcf-n869n" Jan 17 00:06:33.087962 kubelet[3272]: I0117 00:06:33.086925 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acb3da73-aad2-4399-b6f1-7f3c1a0d99c5-config-volume\") pod \"coredns-674b8bbfcf-n869n\" (UID: \"acb3da73-aad2-4399-b6f1-7f3c1a0d99c5\") " pod="kube-system/coredns-674b8bbfcf-n869n" Jan 17 00:06:33.097101 systemd[1]: Created slice kubepods-burstable-podacb3da73_aad2_4399_b6f1_7f3c1a0d99c5.slice - libcontainer container kubepods-burstable-podacb3da73_aad2_4399_b6f1_7f3c1a0d99c5.slice. Jan 17 00:06:33.103561 systemd[1]: Created slice kubepods-besteffort-pod8214a0c3_a0f7_40b6_915d_08cea6de347e.slice - libcontainer container kubepods-besteffort-pod8214a0c3_a0f7_40b6_915d_08cea6de347e.slice. Jan 17 00:06:33.109659 containerd[1715]: time="2026-01-17T00:06:33.109336041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7gm8,Uid:8214a0c3-a0f7-40b6-915d-08cea6de347e,Namespace:calico-system,Attempt:0,}" Jan 17 00:06:33.187297 kubelet[3272]: I0117 00:06:33.187261 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e747a046-268c-4a51-81e2-3f445b48b5cd-tigera-ca-bundle\") pod \"calico-kube-controllers-68ddb45bfc-grgqw\" (UID: \"e747a046-268c-4a51-81e2-3f445b48b5cd\") " pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" Jan 17 00:06:33.188427 kubelet[3272]: I0117 00:06:33.187590 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22zc\" (UniqueName: \"kubernetes.io/projected/e747a046-268c-4a51-81e2-3f445b48b5cd-kube-api-access-r22zc\") pod \"calico-kube-controllers-68ddb45bfc-grgqw\" (UID: \"e747a046-268c-4a51-81e2-3f445b48b5cd\") " pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" Jan 17 00:06:33.247986 systemd[1]: Created slice kubepods-besteffort-pode2e5377f_9c87_4d0a_b448_a7595a3af9ad.slice - libcontainer container kubepods-besteffort-pode2e5377f_9c87_4d0a_b448_a7595a3af9ad.slice. Jan 17 00:06:33.253312 systemd[1]: Created slice kubepods-besteffort-pode747a046_268c_4a51_81e2_3f445b48b5cd.slice - libcontainer container kubepods-besteffort-pode747a046_268c_4a51_81e2_3f445b48b5cd.slice. 
Jan 17 00:06:33.288614 kubelet[3272]: I0117 00:06:33.288575 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e5377f-9c87-4d0a-b448-a7595a3af9ad-goldmane-ca-bundle\") pod \"goldmane-666569f655-vtx75\" (UID: \"e2e5377f-9c87-4d0a-b448-a7595a3af9ad\") " pod="calico-system/goldmane-666569f655-vtx75" Jan 17 00:06:33.288614 kubelet[3272]: I0117 00:06:33.288618 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e2e5377f-9c87-4d0a-b448-a7595a3af9ad-goldmane-key-pair\") pod \"goldmane-666569f655-vtx75\" (UID: \"e2e5377f-9c87-4d0a-b448-a7595a3af9ad\") " pod="calico-system/goldmane-666569f655-vtx75" Jan 17 00:06:33.288779 kubelet[3272]: I0117 00:06:33.288638 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzksz\" (UniqueName: \"kubernetes.io/projected/e2e5377f-9c87-4d0a-b448-a7595a3af9ad-kube-api-access-nzksz\") pod \"goldmane-666569f655-vtx75\" (UID: \"e2e5377f-9c87-4d0a-b448-a7595a3af9ad\") " pod="calico-system/goldmane-666569f655-vtx75" Jan 17 00:06:33.288779 kubelet[3272]: I0117 00:06:33.288669 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e5377f-9c87-4d0a-b448-a7595a3af9ad-config\") pod \"goldmane-666569f655-vtx75\" (UID: \"e2e5377f-9c87-4d0a-b448-a7595a3af9ad\") " pod="calico-system/goldmane-666569f655-vtx75" Jan 17 00:06:33.326265 systemd[1]: Created slice kubepods-burstable-pod616526de_5a58_4998_9f29_2aa2e02e1a8e.slice - libcontainer container kubepods-burstable-pod616526de_5a58_4998_9f29_2aa2e02e1a8e.slice. Jan 17 00:06:33.381896 systemd[1]: Created slice kubepods-besteffort-pod6e9ff54d_9f3a_4f62_92e0_56921b0f16ea.slice - libcontainer container kubepods-besteffort-pod6e9ff54d_9f3a_4f62_92e0_56921b0f16ea.slice. 
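A side note on the "Created slice" entries in this stretch: with the systemd cgroup driver, the slice name is derived mechanically from the pod's QoS class and UID. Dashes in systemd slice names encode hierarchy, so the kubelet escapes the UID's dashes to underscores. A tiny sketch of that mapping (a hypothetical helper, but it reproduces the names in the log; guaranteed-QoS pods, which drop the QoS segment, are not handled here):

```go
// Reproduces the kubepods-*.slice names seen in the log from QoS class + pod UID.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	// Dashes in a slice name would create extra nesting levels,
	// so the pod UID's dashes become underscores.
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSlice("burstable", "acb3da73-aad2-4399-b6f1-7f3c1a0d99c5"))
	// kubepods-burstable-podacb3da73_aad2_4399_b6f1_7f3c1a0d99c5.slice
	fmt.Println(podSlice("besteffort", "9340ab9f-05b7-44f8-b60d-bcae76bd89d3"))
	// kubepods-besteffort-pod9340ab9f_05b7_44f8_b60d_bcae76bd89d3.slice
}
```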
Jan 17 00:06:33.388889 kubelet[3272]: I0117 00:06:33.388843 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5ndq\" (UniqueName: \"kubernetes.io/projected/6e9ff54d-9f3a-4f62-92e0-56921b0f16ea-kube-api-access-g5ndq\") pod \"calico-apiserver-77bf786874-gphpw\" (UID: \"6e9ff54d-9f3a-4f62-92e0-56921b0f16ea\") " pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" Jan 17 00:06:33.389148 kubelet[3272]: I0117 00:06:33.388906 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616526de-5a58-4998-9f29-2aa2e02e1a8e-config-volume\") pod \"coredns-674b8bbfcf-pgmjl\" (UID: \"616526de-5a58-4998-9f29-2aa2e02e1a8e\") " pod="kube-system/coredns-674b8bbfcf-pgmjl" Jan 17 00:06:33.389148 kubelet[3272]: I0117 00:06:33.388946 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-595rb\" (UniqueName: \"kubernetes.io/projected/616526de-5a58-4998-9f29-2aa2e02e1a8e-kube-api-access-595rb\") pod \"coredns-674b8bbfcf-pgmjl\" (UID: \"616526de-5a58-4998-9f29-2aa2e02e1a8e\") " pod="kube-system/coredns-674b8bbfcf-pgmjl" Jan 17 00:06:33.389148 kubelet[3272]: I0117 00:06:33.388963 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e9ff54d-9f3a-4f62-92e0-56921b0f16ea-calico-apiserver-certs\") pod \"calico-apiserver-77bf786874-gphpw\" (UID: \"6e9ff54d-9f3a-4f62-92e0-56921b0f16ea\") " pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" Jan 17 00:06:33.402184 containerd[1715]: time="2026-01-17T00:06:33.402141401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n869n,Uid:acb3da73-aad2-4399-b6f1-7f3c1a0d99c5,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:33.445108 containerd[1715]: time="2026-01-17T00:06:33.445073619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:06:33.450581 systemd[1]: Created slice kubepods-besteffort-pod95a88bcc_84de_4477_8213_8107aa51e41e.slice - libcontainer container kubepods-besteffort-pod95a88bcc_84de_4477_8213_8107aa51e41e.slice. Jan 17 00:06:33.456782 systemd[1]: Created slice kubepods-besteffort-pod9340ab9f_05b7_44f8_b60d_bcae76bd89d3.slice - libcontainer container kubepods-besteffort-pod9340ab9f_05b7_44f8_b60d_bcae76bd89d3.slice. 
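Every RunPodSandbox failure below trips over the same precondition: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, and per the error text itself that file only exists once the calico/node container is running with /var/lib/calico/ mounted. At this point the node image is still being pulled (the PullImage "ghcr.io/flatcar/calico/node:v3.30.4" entry just above), so every sandbox ADD/DEL aborts at the stat. A minimal reproduction of the check; the path is from the log, while the exact failure site inside the plugin is an assumption inferred from the logged message:

```go
// Stats the file the Calico CNI plugin needs before it can do anything.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		// On this node, at this point in the log, this is the branch taken:
		// "stat /var/lib/calico/nodename: no such file or directory".
		fmt.Println("CNI ADD/DEL would fail:", err)
		return
	}
	fmt.Println("node name:", string(name))
}
```

Once calico-node starts and writes this file, sandbox creation for the pending pods (csi-node-driver, coredns, goldmane, the apiservers, whisker) should stop failing for this reason.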
Jan 17 00:06:33.491184 kubelet[3272]: I0117 00:06:33.489857 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9340ab9f-05b7-44f8-b60d-bcae76bd89d3-calico-apiserver-certs\") pod \"calico-apiserver-77bf786874-qhq5d\" (UID: \"9340ab9f-05b7-44f8-b60d-bcae76bd89d3\") " pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" Jan 17 00:06:33.491184 kubelet[3272]: I0117 00:06:33.489939 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95a88bcc-84de-4477-8213-8107aa51e41e-whisker-backend-key-pair\") pod \"whisker-5f8586bb9c-lqdcp\" (UID: \"95a88bcc-84de-4477-8213-8107aa51e41e\") " pod="calico-system/whisker-5f8586bb9c-lqdcp" Jan 17 00:06:33.491184 kubelet[3272]: I0117 00:06:33.489956 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95a88bcc-84de-4477-8213-8107aa51e41e-whisker-ca-bundle\") pod \"whisker-5f8586bb9c-lqdcp\" (UID: \"95a88bcc-84de-4477-8213-8107aa51e41e\") " pod="calico-system/whisker-5f8586bb9c-lqdcp" Jan 17 00:06:33.491184 kubelet[3272]: I0117 00:06:33.489972 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8cwm\" (UniqueName: \"kubernetes.io/projected/95a88bcc-84de-4477-8213-8107aa51e41e-kube-api-access-j8cwm\") pod \"whisker-5f8586bb9c-lqdcp\" (UID: \"95a88bcc-84de-4477-8213-8107aa51e41e\") " pod="calico-system/whisker-5f8586bb9c-lqdcp" Jan 17 00:06:33.491184 kubelet[3272]: I0117 00:06:33.490044 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmd54\" (UniqueName: \"kubernetes.io/projected/9340ab9f-05b7-44f8-b60d-bcae76bd89d3-kube-api-access-lmd54\") pod \"calico-apiserver-77bf786874-qhq5d\" (UID: \"9340ab9f-05b7-44f8-b60d-bcae76bd89d3\") " pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" Jan 17 00:06:33.540218 containerd[1715]: time="2026-01-17T00:06:33.540162498Z" level=error msg="Failed to destroy network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.540481 containerd[1715]: time="2026-01-17T00:06:33.540454818Z" level=error msg="encountered an error cleaning up failed sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.540519 containerd[1715]: time="2026-01-17T00:06:33.540502378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7gm8,Uid:8214a0c3-a0f7-40b6-915d-08cea6de347e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.541965 kubelet[3272]: E0117 00:06:33.541727 3272 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.541965 kubelet[3272]: E0117 00:06:33.541796 3272 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7gm8" Jan 17 00:06:33.541965 kubelet[3272]: E0117 00:06:33.541815 3272 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7gm8" Jan 17 00:06:33.542104 kubelet[3272]: E0117 00:06:33.541864 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:33.551627 containerd[1715]: time="2026-01-17T00:06:33.551586742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vtx75,Uid:e2e5377f-9c87-4d0a-b448-a7595a3af9ad,Namespace:calico-system,Attempt:0,}" Jan 17 00:06:33.556412 containerd[1715]: time="2026-01-17T00:06:33.556376184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ddb45bfc-grgqw,Uid:e747a046-268c-4a51-81e2-3f445b48b5cd,Namespace:calico-system,Attempt:0,}" Jan 17 00:06:33.561106 containerd[1715]: time="2026-01-17T00:06:33.561051066Z" level=error msg="Failed to destroy network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.561366 containerd[1715]: time="2026-01-17T00:06:33.561342986Z" level=error msg="encountered an error cleaning up failed sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.561422 containerd[1715]: 
time="2026-01-17T00:06:33.561397866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n869n,Uid:acb3da73-aad2-4399-b6f1-7f3c1a0d99c5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.561730 kubelet[3272]: E0117 00:06:33.561632 3272 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.561802 kubelet[3272]: E0117 00:06:33.561732 3272 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n869n" Jan 17 00:06:33.561802 kubelet[3272]: E0117 00:06:33.561776 3272 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n869n" Jan 17 00:06:33.562045 kubelet[3272]: E0117 00:06:33.561847 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n869n_kube-system(acb3da73-aad2-4399-b6f1-7f3c1a0d99c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n869n_kube-system(acb3da73-aad2-4399-b6f1-7f3c1a0d99c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n869n" podUID="acb3da73-aad2-4399-b6f1-7f3c1a0d99c5" Jan 17 00:06:33.668193 containerd[1715]: time="2026-01-17T00:06:33.668069510Z" level=error msg="Failed to destroy network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.669058 containerd[1715]: time="2026-01-17T00:06:33.668876750Z" level=error msg="encountered an error cleaning up failed sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 17 00:06:33.669058 containerd[1715]: time="2026-01-17T00:06:33.669010910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ddb45bfc-grgqw,Uid:e747a046-268c-4a51-81e2-3f445b48b5cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.669792 kubelet[3272]: E0117 00:06:33.669443 3272 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.669792 kubelet[3272]: E0117 00:06:33.669511 3272 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" Jan 17 00:06:33.669792 kubelet[3272]: E0117 00:06:33.669553 3272 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" Jan 17 00:06:33.669921 kubelet[3272]: E0117 00:06:33.669625 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68ddb45bfc-grgqw_calico-system(e747a046-268c-4a51-81e2-3f445b48b5cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68ddb45bfc-grgqw_calico-system(e747a046-268c-4a51-81e2-3f445b48b5cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:06:33.670385 containerd[1715]: time="2026-01-17T00:06:33.670271351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pgmjl,Uid:616526de-5a58-4998-9f29-2aa2e02e1a8e,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:33.671650 containerd[1715]: time="2026-01-17T00:06:33.671613712Z" level=error msg="Failed to destroy network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.671968 
containerd[1715]: time="2026-01-17T00:06:33.671930152Z" level=error msg="encountered an error cleaning up failed sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.672047 containerd[1715]: time="2026-01-17T00:06:33.671987232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vtx75,Uid:e2e5377f-9c87-4d0a-b448-a7595a3af9ad,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.672211 kubelet[3272]: E0117 00:06:33.672180 3272 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.672258 kubelet[3272]: E0117 00:06:33.672227 3272 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vtx75" Jan 17 00:06:33.672258 kubelet[3272]: E0117 00:06:33.672247 3272 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vtx75" Jan 17 00:06:33.672315 kubelet[3272]: E0117 00:06:33.672289 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vtx75_calico-system(e2e5377f-9c87-4d0a-b448-a7595a3af9ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vtx75_calico-system(e2e5377f-9c87-4d0a-b448-a7595a3af9ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:06:33.685781 containerd[1715]: time="2026-01-17T00:06:33.685496637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf786874-gphpw,Uid:6e9ff54d-9f3a-4f62-92e0-56921b0f16ea,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:06:33.755768 containerd[1715]: time="2026-01-17T00:06:33.755730346Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f8586bb9c-lqdcp,Uid:95a88bcc-84de-4477-8213-8107aa51e41e,Namespace:calico-system,Attempt:0,}" Jan 17 00:06:33.764029 containerd[1715]: time="2026-01-17T00:06:33.763943909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf786874-qhq5d,Uid:9340ab9f-05b7-44f8-b60d-bcae76bd89d3,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:06:33.816980 containerd[1715]: time="2026-01-17T00:06:33.816924691Z" level=error msg="Failed to destroy network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.822764 containerd[1715]: time="2026-01-17T00:06:33.822623013Z" level=error msg="encountered an error cleaning up failed sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.822764 containerd[1715]: time="2026-01-17T00:06:33.822686974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pgmjl,Uid:616526de-5a58-4998-9f29-2aa2e02e1a8e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.824162 kubelet[3272]: E0117 00:06:33.823127 3272 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.824162 kubelet[3272]: E0117 00:06:33.823191 3272 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pgmjl" Jan 17 00:06:33.824162 kubelet[3272]: E0117 00:06:33.823209 3272 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pgmjl" Jan 17 00:06:33.824376 kubelet[3272]: E0117 00:06:33.823251 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pgmjl_kube-system(616526de-5a58-4998-9f29-2aa2e02e1a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-pgmjl_kube-system(616526de-5a58-4998-9f29-2aa2e02e1a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pgmjl" podUID="616526de-5a58-4998-9f29-2aa2e02e1a8e" Jan 17 00:06:33.829628 containerd[1715]: time="2026-01-17T00:06:33.829579696Z" level=error msg="Failed to destroy network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.830321 containerd[1715]: time="2026-01-17T00:06:33.830289817Z" level=error msg="encountered an error cleaning up failed sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.830586 containerd[1715]: time="2026-01-17T00:06:33.830463417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf786874-gphpw,Uid:6e9ff54d-9f3a-4f62-92e0-56921b0f16ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.831610 kubelet[3272]: E0117 00:06:33.831109 3272 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.831610 kubelet[3272]: E0117 00:06:33.831161 3272 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" Jan 17 00:06:33.831610 kubelet[3272]: E0117 00:06:33.831180 3272 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" Jan 17 00:06:33.831748 kubelet[3272]: E0117 00:06:33.831221 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-77bf786874-gphpw_calico-apiserver(6e9ff54d-9f3a-4f62-92e0-56921b0f16ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bf786874-gphpw_calico-apiserver(6e9ff54d-9f3a-4f62-92e0-56921b0f16ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:06:33.901196 containerd[1715]: time="2026-01-17T00:06:33.901131246Z" level=error msg="Failed to destroy network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.903924 containerd[1715]: time="2026-01-17T00:06:33.903821607Z" level=error msg="encountered an error cleaning up failed sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.904120 containerd[1715]: time="2026-01-17T00:06:33.903898127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f8586bb9c-lqdcp,Uid:95a88bcc-84de-4477-8213-8107aa51e41e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.904568 kubelet[3272]: E0117 00:06:33.904406 3272 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.904568 kubelet[3272]: E0117 00:06:33.904466 3272 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f8586bb9c-lqdcp" Jan 17 00:06:33.904568 kubelet[3272]: E0117 00:06:33.904499 3272 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f8586bb9c-lqdcp" Jan 17 00:06:33.904699 kubelet[3272]: E0117 00:06:33.904563 
3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f8586bb9c-lqdcp_calico-system(95a88bcc-84de-4477-8213-8107aa51e41e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f8586bb9c-lqdcp_calico-system(95a88bcc-84de-4477-8213-8107aa51e41e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f8586bb9c-lqdcp" podUID="95a88bcc-84de-4477-8213-8107aa51e41e" Jan 17 00:06:33.923551 containerd[1715]: time="2026-01-17T00:06:33.923427575Z" level=error msg="Failed to destroy network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.925001 containerd[1715]: time="2026-01-17T00:06:33.924851295Z" level=error msg="encountered an error cleaning up failed sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.925001 containerd[1715]: time="2026-01-17T00:06:33.924905615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf786874-qhq5d,Uid:9340ab9f-05b7-44f8-b60d-bcae76bd89d3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.925179 kubelet[3272]: E0117 00:06:33.925110 3272 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:33.925179 kubelet[3272]: E0117 00:06:33.925163 3272 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" Jan 17 00:06:33.925252 kubelet[3272]: E0117 00:06:33.925185 3272 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" Jan 17 00:06:33.925277 kubelet[3272]: E0117 00:06:33.925249 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77bf786874-qhq5d_calico-apiserver(9340ab9f-05b7-44f8-b60d-bcae76bd89d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bf786874-qhq5d_calico-apiserver(9340ab9f-05b7-44f8-b60d-bcae76bd89d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3" Jan 17 00:06:34.241417 kubelet[3272]: I0117 00:06:34.241364 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:06:34.242260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c-shm.mount: Deactivated successfully. Jan 17 00:06:34.242674 containerd[1715]: time="2026-01-17T00:06:34.242550426Z" level=info msg="StopPodSandbox for \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\"" Jan 17 00:06:34.242836 containerd[1715]: time="2026-01-17T00:06:34.242712546Z" level=info msg="Ensure that sandbox 18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e in task-service has been cleanup successfully" Jan 17 00:06:34.246860 kubelet[3272]: I0117 00:06:34.246841 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:06:34.248103 containerd[1715]: time="2026-01-17T00:06:34.247651548Z" level=info msg="StopPodSandbox for \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\"" Jan 17 00:06:34.249552 containerd[1715]: time="2026-01-17T00:06:34.249047388Z" level=info msg="Ensure that sandbox a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7 in task-service has been cleanup successfully" Jan 17 00:06:34.253065 containerd[1715]: time="2026-01-17T00:06:34.253038230Z" level=info msg="StopPodSandbox for \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\"" Jan 17 00:06:34.253184 kubelet[3272]: I0117 00:06:34.252516 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:06:34.253655 containerd[1715]: time="2026-01-17T00:06:34.253557470Z" level=info msg="Ensure that sandbox c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92 in task-service has been cleanup successfully" Jan 17 00:06:34.256095 kubelet[3272]: I0117 00:06:34.256017 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:06:34.258426 containerd[1715]: time="2026-01-17T00:06:34.258254792Z" level=info msg="StopPodSandbox for \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\"" Jan 17 00:06:34.258496 containerd[1715]: time="2026-01-17T00:06:34.258453912Z" level=info msg="Ensure that sandbox 4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498 in 
task-service has been cleanup successfully" Jan 17 00:06:34.261654 kubelet[3272]: I0117 00:06:34.261186 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:06:34.264470 containerd[1715]: time="2026-01-17T00:06:34.263452354Z" level=info msg="StopPodSandbox for \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\"" Jan 17 00:06:34.264470 containerd[1715]: time="2026-01-17T00:06:34.263619514Z" level=info msg="Ensure that sandbox 5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f in task-service has been cleanup successfully" Jan 17 00:06:34.269551 kubelet[3272]: I0117 00:06:34.269040 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:06:34.269812 containerd[1715]: time="2026-01-17T00:06:34.269783637Z" level=info msg="StopPodSandbox for \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\"" Jan 17 00:06:34.270044 containerd[1715]: time="2026-01-17T00:06:34.270025837Z" level=info msg="Ensure that sandbox af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd in task-service has been cleanup successfully" Jan 17 00:06:34.280026 kubelet[3272]: I0117 00:06:34.278943 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:06:34.280157 containerd[1715]: time="2026-01-17T00:06:34.279890401Z" level=info msg="StopPodSandbox for \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\"" Jan 17 00:06:34.280157 containerd[1715]: time="2026-01-17T00:06:34.280078961Z" level=info msg="Ensure that sandbox 61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c in task-service has been cleanup successfully" Jan 17 00:06:34.284481 kubelet[3272]: I0117 00:06:34.284357 3272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:34.285733 containerd[1715]: time="2026-01-17T00:06:34.285667443Z" level=info msg="StopPodSandbox for \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\"" Jan 17 00:06:34.286547 containerd[1715]: time="2026-01-17T00:06:34.286176364Z" level=info msg="Ensure that sandbox 0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c in task-service has been cleanup successfully" Jan 17 00:06:34.339640 containerd[1715]: time="2026-01-17T00:06:34.339589425Z" level=error msg="StopPodSandbox for \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\" failed" error="failed to destroy network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:34.339916 kubelet[3272]: E0117 00:06:34.339884 3272 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:06:34.340061 kubelet[3272]: E0117 00:06:34.340024 3272 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498"} Jan 17 00:06:34.340184 kubelet[3272]: E0117 00:06:34.340131 3272 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"616526de-5a58-4998-9f29-2aa2e02e1a8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:06:34.340184 kubelet[3272]: E0117 00:06:34.340158 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"616526de-5a58-4998-9f29-2aa2e02e1a8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pgmjl" podUID="616526de-5a58-4998-9f29-2aa2e02e1a8e" Jan 17 00:06:34.354702 containerd[1715]: time="2026-01-17T00:06:34.354654552Z" level=error msg="StopPodSandbox for \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\" failed" error="failed to destroy network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:34.355048 kubelet[3272]: E0117 00:06:34.355014 3272 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:06:34.355182 kubelet[3272]: E0117 00:06:34.355164 3272 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd"} Jan 17 00:06:34.355259 kubelet[3272]: E0117 00:06:34.355245 3272 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acb3da73-aad2-4399-b6f1-7f3c1a0d99c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:06:34.355368 kubelet[3272]: E0117 00:06:34.355343 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acb3da73-aad2-4399-b6f1-7f3c1a0d99c5\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n869n" podUID="acb3da73-aad2-4399-b6f1-7f3c1a0d99c5" Jan 17 00:06:34.361560 containerd[1715]: time="2026-01-17T00:06:34.361480274Z" level=error msg="StopPodSandbox for \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\" failed" error="failed to destroy network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:34.362120 kubelet[3272]: E0117 00:06:34.362085 3272 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:34.362292 kubelet[3272]: E0117 00:06:34.362262 3272 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c"} Jan 17 00:06:34.362383 kubelet[3272]: E0117 00:06:34.362358 3272 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95a88bcc-84de-4477-8213-8107aa51e41e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:06:34.362515 kubelet[3272]: E0117 00:06:34.362488 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95a88bcc-84de-4477-8213-8107aa51e41e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f8586bb9c-lqdcp" podUID="95a88bcc-84de-4477-8213-8107aa51e41e" Jan 17 00:06:34.363199 containerd[1715]: time="2026-01-17T00:06:34.363159635Z" level=error msg="StopPodSandbox for \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\" failed" error="failed to destroy network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:34.363454 kubelet[3272]: E0117 00:06:34.363410 3272 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:06:34.363569 kubelet[3272]: E0117 00:06:34.363553 3272 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92"} Jan 17 00:06:34.363688 kubelet[3272]: E0117 00:06:34.363653 3272 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9340ab9f-05b7-44f8-b60d-bcae76bd89d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:06:34.363860 kubelet[3272]: E0117 00:06:34.363839 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9340ab9f-05b7-44f8-b60d-bcae76bd89d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3" Jan 17 00:06:34.366002 containerd[1715]: time="2026-01-17T00:06:34.365969596Z" level=error msg="StopPodSandbox for \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\" failed" error="failed to destroy network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:34.366217 kubelet[3272]: E0117 00:06:34.366195 3272 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:06:34.367560 kubelet[3272]: E0117 00:06:34.367472 3272 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7"} Jan 17 00:06:34.367560 kubelet[3272]: E0117 00:06:34.367508 3272 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e747a046-268c-4a51-81e2-3f445b48b5cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jan 17 00:06:34.367560 kubelet[3272]: E0117 00:06:34.367534 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e747a046-268c-4a51-81e2-3f445b48b5cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:06:34.369886 containerd[1715]: time="2026-01-17T00:06:34.369849998Z" level=error msg="StopPodSandbox for \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\" failed" error="failed to destroy network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:34.370336 kubelet[3272]: E0117 00:06:34.370240 3272 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:06:34.370336 kubelet[3272]: E0117 00:06:34.370271 3272 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e"} Jan 17 00:06:34.370336 kubelet[3272]: E0117 00:06:34.370293 3272 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e9ff54d-9f3a-4f62-92e0-56921b0f16ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:06:34.370336 kubelet[3272]: E0117 00:06:34.370309 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e9ff54d-9f3a-4f62-92e0-56921b0f16ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:06:34.377027 containerd[1715]: time="2026-01-17T00:06:34.376966721Z" level=error msg="StopPodSandbox for \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\" failed" error="failed to destroy network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:34.377410 kubelet[3272]: E0117 00:06:34.377162 3272 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:06:34.377410 kubelet[3272]: E0117 00:06:34.377203 3272 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c"} Jan 17 00:06:34.377410 kubelet[3272]: E0117 00:06:34.377226 3272 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8214a0c3-a0f7-40b6-915d-08cea6de347e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:06:34.377410 kubelet[3272]: E0117 00:06:34.377308 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8214a0c3-a0f7-40b6-915d-08cea6de347e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:34.378575 containerd[1715]: time="2026-01-17T00:06:34.378508161Z" level=error msg="StopPodSandbox for \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\" failed" error="failed to destroy network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:06:34.378710 kubelet[3272]: E0117 00:06:34.378682 3272 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:06:34.378757 kubelet[3272]: E0117 00:06:34.378716 3272 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f"} Jan 17 00:06:34.378757 kubelet[3272]: E0117 00:06:34.378739 3272 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2e5377f-9c87-4d0a-b448-a7595a3af9ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:06:34.378828 kubelet[3272]: E0117 00:06:34.378758 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2e5377f-9c87-4d0a-b448-a7595a3af9ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:06:40.192127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909301861.mount: Deactivated successfully. Jan 17 00:06:40.233555 containerd[1715]: time="2026-01-17T00:06:40.233467158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:40.235575 containerd[1715]: time="2026-01-17T00:06:40.235414879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 17 00:06:40.238550 containerd[1715]: time="2026-01-17T00:06:40.238056240Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:40.254388 containerd[1715]: time="2026-01-17T00:06:40.254343526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:40.255096 containerd[1715]: time="2026-01-17T00:06:40.255071407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.809961628s" Jan 17 00:06:40.255150 containerd[1715]: time="2026-01-17T00:06:40.255101367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 17 00:06:40.285716 containerd[1715]: time="2026-01-17T00:06:40.285670099Z" level=info msg="CreateContainer within sandbox \"3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:06:40.324533 containerd[1715]: time="2026-01-17T00:06:40.324488794Z" level=info msg="CreateContainer within sandbox \"3811d69cc81929dc909bfa89150e1828f1f090f13f4593649c120eaf5030318b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"13bf945577703dd2a50116862c9693b361bd886aacfa4aed1e950d75534bdc64\"" Jan 17 00:06:40.326566 containerd[1715]: time="2026-01-17T00:06:40.325261475Z" level=info msg="StartContainer for \"13bf945577703dd2a50116862c9693b361bd886aacfa4aed1e950d75534bdc64\"" Jan 17 00:06:40.351682 systemd[1]: Started 
cri-containerd-13bf945577703dd2a50116862c9693b361bd886aacfa4aed1e950d75534bdc64.scope - libcontainer container 13bf945577703dd2a50116862c9693b361bd886aacfa4aed1e950d75534bdc64. Jan 17 00:06:40.378111 containerd[1715]: time="2026-01-17T00:06:40.378003856Z" level=info msg="StartContainer for \"13bf945577703dd2a50116862c9693b361bd886aacfa4aed1e950d75534bdc64\" returns successfully" Jan 17 00:06:40.717916 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:06:40.718034 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:06:40.820711 containerd[1715]: time="2026-01-17T00:06:40.819480951Z" level=info msg="StopPodSandbox for \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\"" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.938 [INFO][4497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.938 [INFO][4497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" iface="eth0" netns="/var/run/netns/cni-0a746822-c0b3-4ae1-ba88-eb9d4fa8612c" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.938 [INFO][4497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" iface="eth0" netns="/var/run/netns/cni-0a746822-c0b3-4ae1-ba88-eb9d4fa8612c" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.939 [INFO][4497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" iface="eth0" netns="/var/run/netns/cni-0a746822-c0b3-4ae1-ba88-eb9d4fa8612c" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.939 [INFO][4497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.939 [INFO][4497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.977 [INFO][4508] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.978 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.978 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.988 [WARNING][4508] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist.
Ignoring ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.988 [INFO][4508] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.990 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:40.996237 containerd[1715]: 2026-01-17 00:06:40.993 [INFO][4497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:40.997713 containerd[1715]: time="2026-01-17T00:06:40.997141262Z" level=info msg="TearDown network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\" successfully" Jan 17 00:06:40.997713 containerd[1715]: time="2026-01-17T00:06:40.997178942Z" level=info msg="StopPodSandbox for \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\" returns successfully" Jan 17 00:06:41.053813 kubelet[3272]: I0117 00:06:41.051655 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95a88bcc-84de-4477-8213-8107aa51e41e-whisker-backend-key-pair\") pod \"95a88bcc-84de-4477-8213-8107aa51e41e\" (UID: \"95a88bcc-84de-4477-8213-8107aa51e41e\") " Jan 17 00:06:41.053813 kubelet[3272]: I0117 00:06:41.051725 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95a88bcc-84de-4477-8213-8107aa51e41e-whisker-ca-bundle\") pod \"95a88bcc-84de-4477-8213-8107aa51e41e\" (UID: \"95a88bcc-84de-4477-8213-8107aa51e41e\") " Jan 17 00:06:41.053813 kubelet[3272]: I0117 00:06:41.051752 3272 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8cwm\" (UniqueName: \"kubernetes.io/projected/95a88bcc-84de-4477-8213-8107aa51e41e-kube-api-access-j8cwm\") pod \"95a88bcc-84de-4477-8213-8107aa51e41e\" (UID: \"95a88bcc-84de-4477-8213-8107aa51e41e\") " Jan 17 00:06:41.055341 kubelet[3272]: I0117 00:06:41.055259 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a88bcc-84de-4477-8213-8107aa51e41e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "95a88bcc-84de-4477-8213-8107aa51e41e" (UID: "95a88bcc-84de-4477-8213-8107aa51e41e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:06:41.057378 kubelet[3272]: I0117 00:06:41.057352 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a88bcc-84de-4477-8213-8107aa51e41e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "95a88bcc-84de-4477-8213-8107aa51e41e" (UID: "95a88bcc-84de-4477-8213-8107aa51e41e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:06:41.057918 kubelet[3272]: I0117 00:06:41.057882 3272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a88bcc-84de-4477-8213-8107aa51e41e-kube-api-access-j8cwm" (OuterVolumeSpecName: "kube-api-access-j8cwm") pod "95a88bcc-84de-4477-8213-8107aa51e41e" (UID: "95a88bcc-84de-4477-8213-8107aa51e41e"). InnerVolumeSpecName "kube-api-access-j8cwm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:06:41.152024 kubelet[3272]: I0117 00:06:41.151985 3272 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95a88bcc-84de-4477-8213-8107aa51e41e-whisker-ca-bundle\") on node \"ci-4081.3.6-n-f5e0a482e1\" DevicePath \"\"" Jan 17 00:06:41.152024 kubelet[3272]: I0117 00:06:41.152020 3272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j8cwm\" (UniqueName: \"kubernetes.io/projected/95a88bcc-84de-4477-8213-8107aa51e41e-kube-api-access-j8cwm\") on node \"ci-4081.3.6-n-f5e0a482e1\" DevicePath \"\"" Jan 17 00:06:41.152024 kubelet[3272]: I0117 00:06:41.152031 3272 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95a88bcc-84de-4477-8213-8107aa51e41e-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-f5e0a482e1\" DevicePath \"\"" Jan 17 00:06:41.193097 systemd[1]: run-netns-cni\x2d0a746822\x2dc0b3\x2d4ae1\x2dba88\x2deb9d4fa8612c.mount: Deactivated successfully. Jan 17 00:06:41.193183 systemd[1]: var-lib-kubelet-pods-95a88bcc\x2d84de\x2d4477\x2d8213\x2d8107aa51e41e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj8cwm.mount: Deactivated successfully. Jan 17 00:06:41.193237 systemd[1]: var-lib-kubelet-pods-95a88bcc\x2d84de\x2d4477\x2d8213\x2d8107aa51e41e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:06:41.323576 systemd[1]: Removed slice kubepods-besteffort-pod95a88bcc_84de_4477_8213_8107aa51e41e.slice - libcontainer container kubepods-besteffort-pod95a88bcc_84de_4477_8213_8107aa51e41e.slice. Jan 17 00:06:41.346257 systemd[1]: run-containerd-runc-k8s.io-13bf945577703dd2a50116862c9693b361bd886aacfa4aed1e950d75534bdc64-runc.f8iwBA.mount: Deactivated successfully. Jan 17 00:06:41.353184 kubelet[3272]: I0117 00:06:41.351298 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-26k4t" podStartSLOduration=1.956539213 podStartE2EDuration="33.351284043s" podCreationTimestamp="2026-01-17 00:06:08 +0000 UTC" firstStartedPulling="2026-01-17 00:06:08.861193977 +0000 UTC m=+25.934139819" lastFinishedPulling="2026-01-17 00:06:40.255938807 +0000 UTC m=+57.328884649" observedRunningTime="2026-01-17 00:06:41.351141563 +0000 UTC m=+58.424087445" watchObservedRunningTime="2026-01-17 00:06:41.351284043 +0000 UTC m=+58.424229885" Jan 17 00:06:41.455569 systemd[1]: Created slice kubepods-besteffort-pod8057ab60_fa20_42e9_a7e5_844713387641.slice - libcontainer container kubepods-besteffort-pod8057ab60_fa20_42e9_a7e5_844713387641.slice. 
Jan 17 00:06:41.555745 kubelet[3272]: I0117 00:06:41.555706 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77t8\" (UniqueName: \"kubernetes.io/projected/8057ab60-fa20-42e9-a7e5-844713387641-kube-api-access-t77t8\") pod \"whisker-6b946cd94f-7mkrh\" (UID: \"8057ab60-fa20-42e9-a7e5-844713387641\") " pod="calico-system/whisker-6b946cd94f-7mkrh" Jan 17 00:06:41.556035 kubelet[3272]: I0117 00:06:41.555940 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8057ab60-fa20-42e9-a7e5-844713387641-whisker-ca-bundle\") pod \"whisker-6b946cd94f-7mkrh\" (UID: \"8057ab60-fa20-42e9-a7e5-844713387641\") " pod="calico-system/whisker-6b946cd94f-7mkrh" Jan 17 00:06:41.556035 kubelet[3272]: I0117 00:06:41.556004 3272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8057ab60-fa20-42e9-a7e5-844713387641-whisker-backend-key-pair\") pod \"whisker-6b946cd94f-7mkrh\" (UID: \"8057ab60-fa20-42e9-a7e5-844713387641\") " pod="calico-system/whisker-6b946cd94f-7mkrh" Jan 17 00:06:41.759651 containerd[1715]: time="2026-01-17T00:06:41.759563806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b946cd94f-7mkrh,Uid:8057ab60-fa20-42e9-a7e5-844713387641,Namespace:calico-system,Attempt:0,}" Jan 17 00:06:42.925619 kernel: bpftool[4718]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:06:42.969916 systemd-networkd[1356]: calia5b52ad32b9: Link UP Jan 17 00:06:42.978110 systemd-networkd[1356]: calia5b52ad32b9: Gained carrier Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.780 [INFO][4667] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.802 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0 whisker-6b946cd94f- calico-system 8057ab60-fa20-42e9-a7e5-844713387641 976 0 2026-01-17 00:06:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b946cd94f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-f5e0a482e1 whisker-6b946cd94f-7mkrh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia5b52ad32b9 [] [] }} ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Namespace="calico-system" Pod="whisker-6b946cd94f-7mkrh" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.802 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Namespace="calico-system" Pod="whisker-6b946cd94f-7mkrh" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.838 [INFO][4687] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" HandleID="k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" 
Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.838 [INFO][4687] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" HandleID="k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f5e0a482e1", "pod":"whisker-6b946cd94f-7mkrh", "timestamp":"2026-01-17 00:06:42.838358515 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f5e0a482e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.838 [INFO][4687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.838 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.839 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f5e0a482e1' Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.851 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.855 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.859 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.861 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.865 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.865 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.867 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727 Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.872 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.883 [INFO][4687] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.129/26] block=192.168.38.128/26 handle="k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.883 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.129/26] 
handle="k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.883 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:43.051775 containerd[1715]: 2026-01-17 00:06:42.883 [INFO][4687] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.129/26] IPv6=[] ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" HandleID="k8s-pod-network.5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" Jan 17 00:06:43.053224 containerd[1715]: 2026-01-17 00:06:42.886 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Namespace="calico-system" Pod="whisker-6b946cd94f-7mkrh" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0", GenerateName:"whisker-6b946cd94f-", Namespace:"calico-system", SelfLink:"", UID:"8057ab60-fa20-42e9-a7e5-844713387641", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b946cd94f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"", Pod:"whisker-6b946cd94f-7mkrh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5b52ad32b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:43.053224 containerd[1715]: 2026-01-17 00:06:42.886 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.129/32] ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Namespace="calico-system" Pod="whisker-6b946cd94f-7mkrh" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" Jan 17 00:06:43.053224 containerd[1715]: 2026-01-17 00:06:42.886 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5b52ad32b9 ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Namespace="calico-system" Pod="whisker-6b946cd94f-7mkrh" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" Jan 17 00:06:43.053224 containerd[1715]: 2026-01-17 00:06:42.990 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Namespace="calico-system" Pod="whisker-6b946cd94f-7mkrh" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" Jan 17 
00:06:43.053224 containerd[1715]: 2026-01-17 00:06:42.990 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Namespace="calico-system" Pod="whisker-6b946cd94f-7mkrh" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0", GenerateName:"whisker-6b946cd94f-", Namespace:"calico-system", SelfLink:"", UID:"8057ab60-fa20-42e9-a7e5-844713387641", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b946cd94f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727", Pod:"whisker-6b946cd94f-7mkrh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5b52ad32b9", MAC:"b2:1d:60:41:70:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:43.053224 containerd[1715]: 2026-01-17 00:06:43.047 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727" Namespace="calico-system" Pod="whisker-6b946cd94f-7mkrh" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--6b946cd94f--7mkrh-eth0" Jan 17 00:06:43.073628 kubelet[3272]: I0117 00:06:43.073409 3272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a88bcc-84de-4477-8213-8107aa51e41e" path="/var/lib/kubelet/pods/95a88bcc-84de-4477-8213-8107aa51e41e/volumes" Jan 17 00:06:43.074538 containerd[1715]: time="2026-01-17T00:06:43.074079929Z" level=info msg="StopPodSandbox for \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\"" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.153 [WARNING][4736] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.153 [INFO][4736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.153 [INFO][4736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" iface="eth0" netns="" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.153 [INFO][4736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.153 [INFO][4736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.188 [INFO][4758] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.188 [INFO][4758] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.189 [INFO][4758] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.202 [WARNING][4758] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.203 [INFO][4758] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.205 [INFO][4758] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:43.208672 containerd[1715]: 2026-01-17 00:06:43.206 [INFO][4736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:43.209421 containerd[1715]: time="2026-01-17T00:06:43.208773143Z" level=info msg="TearDown network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\" successfully" Jan 17 00:06:43.209421 containerd[1715]: time="2026-01-17T00:06:43.209150183Z" level=info msg="StopPodSandbox for \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\" returns successfully" Jan 17 00:06:43.209881 containerd[1715]: time="2026-01-17T00:06:43.209855223Z" level=info msg="RemovePodSandbox for \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\"" Jan 17 00:06:43.212084 containerd[1715]: time="2026-01-17T00:06:43.212052904Z" level=info msg="Forcibly stopping sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\"" Jan 17 00:06:43.246477 systemd-networkd[1356]: vxlan.calico: Link UP Jan 17 00:06:43.246486 systemd-networkd[1356]: vxlan.calico: Gained carrier Jan 17 00:06:43.256112 containerd[1715]: time="2026-01-17T00:06:43.250263959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:43.256112 containerd[1715]: time="2026-01-17T00:06:43.255868242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:43.256112 containerd[1715]: time="2026-01-17T00:06:43.255884122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:43.256112 containerd[1715]: time="2026-01-17T00:06:43.255971762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:43.285160 systemd[1]: run-containerd-runc-k8s.io-5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727-runc.V5mkFC.mount: Deactivated successfully. Jan 17 00:06:43.297067 systemd[1]: Started cri-containerd-5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727.scope - libcontainer container 5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727. Jan 17 00:06:43.364969 containerd[1715]: time="2026-01-17T00:06:43.364781445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b946cd94f-7mkrh,Uid:8057ab60-fa20-42e9-a7e5-844713387641,Namespace:calico-system,Attempt:0,} returns sandbox id \"5307f90fdec55c23a30afc8d59a75b00ac11946e3bc68703a8b11849305f1727\"" Jan 17 00:06:43.382572 containerd[1715]: time="2026-01-17T00:06:43.381429892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.284 [WARNING][4774] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.285 [INFO][4774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.285 [INFO][4774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" iface="eth0" netns="" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.285 [INFO][4774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.285 [INFO][4774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.358 [INFO][4818] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.358 [INFO][4818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.359 [INFO][4818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
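The PullImage request above hands containerd the reference ghcr.io/flatcar/calico/whisker:v3.30.4. Resolving a tag is an OCI distribution-spec manifest probe against the registry, and the http.StatusNotFound that containerd reports a moment later is that probe coming back 404. A rough sketch of the probe with plain net/http follows; note that ghcr.io normally requires a bearer token even for public content (containerd negotiates one transparently), so an anonymous run of this sketch may see 401 rather than 404.

package main

import (
	"fmt"
	"net/http"
)

// headManifest performs the distribution-spec manifest probe that underlies
// tag resolution: HEAD /v2/<name>/manifests/<tag>. A 404 here is what
// containerd surfaces as "not found" before giving up on the host.
func headManifest(registry, name, tag string) (int, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, name, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return 0, err
	}
	// Advertise common manifest types; many registries reject requests
	// that do not send an Accept header.
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	status, err := headManifest("ghcr.io", "flatcar/calico/whisker", "v3.30.4")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println("manifest probe status:", status) // 404 -> the NotFound in the log
}
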
Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.374 [WARNING][4818] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.375 [INFO][4818] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" HandleID="k8s-pod-network.0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-whisker--5f8586bb9c--lqdcp-eth0" Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.378 [INFO][4818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:43.388768 containerd[1715]: 2026-01-17 00:06:43.384 [INFO][4774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c" Jan 17 00:06:43.388768 containerd[1715]: time="2026-01-17T00:06:43.388687695Z" level=info msg="TearDown network for sandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\" successfully" Jan 17 00:06:43.397786 containerd[1715]: time="2026-01-17T00:06:43.397743978Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:06:43.398002 containerd[1715]: time="2026-01-17T00:06:43.397810258Z" level=info msg="RemovePodSandbox \"0a59761481112aff244c7a3b699ffd19ffceb42ab0eaf2ea88a8f7f1a468573c\" returns successfully" Jan 17 00:06:43.664218 containerd[1715]: time="2026-01-17T00:06:43.664114444Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:43.666738 containerd[1715]: time="2026-01-17T00:06:43.666690325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:06:43.666840 containerd[1715]: time="2026-01-17T00:06:43.666798285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:06:43.668773 kubelet[3272]: E0117 00:06:43.668541 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:06:43.668773 kubelet[3272]: E0117 00:06:43.668611 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:06:43.670595 kubelet[3272]: E0117 00:06:43.670464 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf2168dfdbe84860b95b751791854241,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t77t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b946cd94f-7mkrh_calico-system(8057ab60-fa20-42e9-a7e5-844713387641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:43.673145 containerd[1715]: time="2026-01-17T00:06:43.672899808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:06:43.924856 containerd[1715]: time="2026-01-17T00:06:43.924630668Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:43.928012 containerd[1715]: time="2026-01-17T00:06:43.927896269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:06:43.928012 containerd[1715]: time="2026-01-17T00:06:43.927957709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:06:43.928653 kubelet[3272]: E0117 00:06:43.928107 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:06:43.928653 kubelet[3272]: E0117 00:06:43.928161 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:06:43.928747 kubelet[3272]: E0117 00:06:43.928271 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t77t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b946cd94f-7mkrh_calico-system(8057ab60-fa20-42e9-a7e5-844713387641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:43.930132 kubelet[3272]: E0117 00:06:43.930040 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641" Jan 17 00:06:44.309715 systemd-networkd[1356]: calia5b52ad32b9: Gained IPv6LL Jan 17 00:06:44.326202 kubelet[3272]: E0117 00:06:44.326076 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641" Jan 17 00:06:44.757679 systemd-networkd[1356]: vxlan.calico: Gained IPv6LL Jan 17 00:06:45.044350 containerd[1715]: time="2026-01-17T00:06:45.043973874Z" level=info msg="StopPodSandbox for \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\"" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.096 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.097 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" iface="eth0" netns="/var/run/netns/cni-cfddb8aa-9431-153a-31db-9ec98a324410" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.097 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" iface="eth0" netns="/var/run/netns/cni-cfddb8aa-9431-153a-31db-9ec98a324410" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.097 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" iface="eth0" netns="/var/run/netns/cni-cfddb8aa-9431-153a-31db-9ec98a324410" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.098 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.098 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.115 [INFO][4905] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.116 [INFO][4905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.116 [INFO][4905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.124 [WARNING][4905] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.124 [INFO][4905] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.125 [INFO][4905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:45.129060 containerd[1715]: 2026-01-17 00:06:45.127 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:06:45.131339 containerd[1715]: time="2026-01-17T00:06:45.131299989Z" level=info msg="TearDown network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\" successfully" Jan 17 00:06:45.131339 containerd[1715]: time="2026-01-17T00:06:45.131334229Z" level=info msg="StopPodSandbox for \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\" returns successfully" Jan 17 00:06:45.131813 systemd[1]: run-netns-cni\x2dcfddb8aa\x2d9431\x2d153a\x2d31db\x2d9ec98a324410.mount: Deactivated successfully. Jan 17 00:06:45.143438 containerd[1715]: time="2026-01-17T00:06:45.143403993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vtx75,Uid:e2e5377f-9c87-4d0a-b448-a7595a3af9ad,Namespace:calico-system,Attempt:1,}" Jan 17 00:06:45.281349 systemd-networkd[1356]: calia41bb928959: Link UP Jan 17 00:06:45.282800 systemd-networkd[1356]: calia41bb928959: Gained carrier Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.217 [INFO][4912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0 goldmane-666569f655- calico-system e2e5377f-9c87-4d0a-b448-a7595a3af9ad 1004 0 2026-01-17 00:06:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-f5e0a482e1 goldmane-666569f655-vtx75 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia41bb928959 [] [] }} ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Namespace="calico-system" Pod="goldmane-666569f655-vtx75" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.217 [INFO][4912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Namespace="calico-system" Pod="goldmane-666569f655-vtx75" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.240 [INFO][4924] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" HandleID="k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" 
Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.240 [INFO][4924] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" HandleID="k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f5e0a482e1", "pod":"goldmane-666569f655-vtx75", "timestamp":"2026-01-17 00:06:45.240699352 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f5e0a482e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.240 [INFO][4924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.240 [INFO][4924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.240 [INFO][4924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f5e0a482e1' Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.249 [INFO][4924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.253 [INFO][4924] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.257 [INFO][4924] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.258 [INFO][4924] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.261 [INFO][4924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.261 [INFO][4924] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.262 [INFO][4924] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.266 [INFO][4924] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.275 [INFO][4924] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.130/26] block=192.168.38.128/26 handle="k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.276 [INFO][4924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.130/26] 
handle="k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.276 [INFO][4924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:45.300200 containerd[1715]: 2026-01-17 00:06:45.276 [INFO][4924] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.130/26] IPv6=[] ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" HandleID="k8s-pod-network.4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.300724 containerd[1715]: 2026-01-17 00:06:45.278 [INFO][4912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Namespace="calico-system" Pod="goldmane-666569f655-vtx75" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e2e5377f-9c87-4d0a-b448-a7595a3af9ad", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"", Pod:"goldmane-666569f655-vtx75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia41bb928959", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:45.300724 containerd[1715]: 2026-01-17 00:06:45.278 [INFO][4912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.130/32] ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Namespace="calico-system" Pod="goldmane-666569f655-vtx75" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.300724 containerd[1715]: 2026-01-17 00:06:45.278 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia41bb928959 ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Namespace="calico-system" Pod="goldmane-666569f655-vtx75" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.300724 containerd[1715]: 2026-01-17 00:06:45.282 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Namespace="calico-system" Pod="goldmane-666569f655-vtx75" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 
17 00:06:45.300724 containerd[1715]: 2026-01-17 00:06:45.282 [INFO][4912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Namespace="calico-system" Pod="goldmane-666569f655-vtx75" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e2e5377f-9c87-4d0a-b448-a7595a3af9ad", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d", Pod:"goldmane-666569f655-vtx75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia41bb928959", MAC:"c2:39:b7:3b:f9:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:45.300724 containerd[1715]: 2026-01-17 00:06:45.296 [INFO][4912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d" Namespace="calico-system" Pod="goldmane-666569f655-vtx75" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:06:45.326095 containerd[1715]: time="2026-01-17T00:06:45.325131466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:45.326095 containerd[1715]: time="2026-01-17T00:06:45.325187826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:45.326095 containerd[1715]: time="2026-01-17T00:06:45.325198146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:45.326095 containerd[1715]: time="2026-01-17T00:06:45.325266826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:45.334156 kubelet[3272]: E0117 00:06:45.333721 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641" Jan 17 00:06:45.352713 systemd[1]: Started cri-containerd-4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d.scope - libcontainer container 4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d. Jan 17 00:06:45.390167 containerd[1715]: time="2026-01-17T00:06:45.390130132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vtx75,Uid:e2e5377f-9c87-4d0a-b448-a7595a3af9ad,Namespace:calico-system,Attempt:1,} returns sandbox id \"4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d\"" Jan 17 00:06:45.391566 containerd[1715]: time="2026-01-17T00:06:45.391494372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:06:45.633644 containerd[1715]: time="2026-01-17T00:06:45.633428469Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:45.635778 containerd[1715]: time="2026-01-17T00:06:45.635705869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:06:45.635778 containerd[1715]: time="2026-01-17T00:06:45.635752149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:06:45.636063 kubelet[3272]: E0117 00:06:45.636027 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:06:45.636133 kubelet[3272]: E0117 00:06:45.636077 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:06:45.636542 kubelet[3272]: E0117 00:06:45.636212 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzksz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vtx75_calico-system(e2e5377f-9c87-4d0a-b448-a7595a3af9ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:45.637834 kubelet[3272]: E0117 00:06:45.637802 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:06:46.334587 kubelet[3272]: E0117 
00:06:46.334425 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:06:46.549716 systemd-networkd[1356]: calia41bb928959: Gained IPv6LL Jan 17 00:06:47.044919 containerd[1715]: time="2026-01-17T00:06:47.043322350Z" level=info msg="StopPodSandbox for \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\"" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.086 [INFO][4991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.086 [INFO][4991] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" iface="eth0" netns="/var/run/netns/cni-313a235c-0638-ead6-0ed6-13fd4cbd9503" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.087 [INFO][4991] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" iface="eth0" netns="/var/run/netns/cni-313a235c-0638-ead6-0ed6-13fd4cbd9503" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.089 [INFO][4991] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" iface="eth0" netns="/var/run/netns/cni-313a235c-0638-ead6-0ed6-13fd4cbd9503" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.089 [INFO][4991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.089 [INFO][4991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.110 [INFO][4998] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.110 [INFO][4998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.110 [INFO][4998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.123 [WARNING][4998] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
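whisker and goldmane are now both cycling between ErrImagePull and ImagePullBackOff. Kubelet spaces those retries with a doubling back-off; the commonly cited defaults are 10s doubling up to a 5m cap, but those numbers are kubelet configuration and an assumption here, not something this log states. A sketch of the resulting schedule:

package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the retry delays for a doubling back-off with a
// cap, the pattern behind the repeated ImagePullBackOff entries above. The
// 10s/5m values mirror kubelet's usual image-pull defaults (assumed).
func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
	var out []time.Duration
	d := initial
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 7) {
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
	// retry 1 after 10s ... retry 6 after 5m0s: once at the cap, the pod
	// stays in ImagePullBackOff until the image becomes resolvable.
}
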
Ignoring ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.123 [INFO][4998] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.125 [INFO][4998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:47.130724 containerd[1715]: 2026-01-17 00:06:47.128 [INFO][4991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:06:47.132582 containerd[1715]: time="2026-01-17T00:06:47.132542826Z" level=info msg="TearDown network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\" successfully" Jan 17 00:06:47.132582 containerd[1715]: time="2026-01-17T00:06:47.132576986Z" level=info msg="StopPodSandbox for \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\" returns successfully" Jan 17 00:06:47.133332 containerd[1715]: time="2026-01-17T00:06:47.133300826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n869n,Uid:acb3da73-aad2-4399-b6f1-7f3c1a0d99c5,Namespace:kube-system,Attempt:1,}" Jan 17 00:06:47.134032 systemd[1]: run-netns-cni\x2d313a235c\x2d0638\x2dead6\x2d0ed6\x2d13fd4cbd9503.mount: Deactivated successfully. Jan 17 00:06:47.262576 systemd-networkd[1356]: calie0fb3cbee3a: Link UP Jan 17 00:06:47.262733 systemd-networkd[1356]: calie0fb3cbee3a: Gained carrier Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.195 [INFO][5005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0 coredns-674b8bbfcf- kube-system acb3da73-aad2-4399-b6f1-7f3c1a0d99c5 1027 0 2026-01-17 00:05:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-f5e0a482e1 coredns-674b8bbfcf-n869n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0fb3cbee3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-n869n" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.195 [INFO][5005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-n869n" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.224 [INFO][5018] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" HandleID="k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" 
Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.225 [INFO][5018] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" HandleID="k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-f5e0a482e1", "pod":"coredns-674b8bbfcf-n869n", "timestamp":"2026-01-17 00:06:47.224933862 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f5e0a482e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.225 [INFO][5018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.225 [INFO][5018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.225 [INFO][5018] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f5e0a482e1' Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.233 [INFO][5018] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.237 [INFO][5018] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.240 [INFO][5018] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.242 [INFO][5018] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.243 [INFO][5018] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.243 [INFO][5018] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.244 [INFO][5018] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86 Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.248 [INFO][5018] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.256 [INFO][5018] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.131/26] block=192.168.38.128/26 handle="k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.256 [INFO][5018] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.131/26] 
handle="k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.256 [INFO][5018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:47.280942 containerd[1715]: 2026-01-17 00:06:47.256 [INFO][5018] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.131/26] IPv6=[] ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" HandleID="k8s-pod-network.4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.281797 containerd[1715]: 2026-01-17 00:06:47.258 [INFO][5005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-n869n" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acb3da73-aad2-4399-b6f1-7f3c1a0d99c5", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"", Pod:"coredns-674b8bbfcf-n869n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fb3cbee3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:47.281797 containerd[1715]: 2026-01-17 00:06:47.258 [INFO][5005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.131/32] ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-n869n" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.281797 containerd[1715]: 2026-01-17 00:06:47.258 [INFO][5005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0fb3cbee3a ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-n869n" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 
00:06:47.281797 containerd[1715]: 2026-01-17 00:06:47.261 [INFO][5005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-n869n" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.281797 containerd[1715]: 2026-01-17 00:06:47.263 [INFO][5005] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-n869n" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acb3da73-aad2-4399-b6f1-7f3c1a0d99c5", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86", Pod:"coredns-674b8bbfcf-n869n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fb3cbee3a", MAC:"32:4e:8a:99:86:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:47.281797 containerd[1715]: 2026-01-17 00:06:47.277 [INFO][5005] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-n869n" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:06:47.307450 containerd[1715]: time="2026-01-17T00:06:47.307283815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:47.308820 containerd[1715]: time="2026-01-17T00:06:47.307721775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:47.308949 containerd[1715]: time="2026-01-17T00:06:47.308910656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:47.309247 containerd[1715]: time="2026-01-17T00:06:47.309174816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:47.334682 systemd[1]: Started cri-containerd-4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86.scope - libcontainer container 4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86. Jan 17 00:06:47.337324 kubelet[3272]: E0117 00:06:47.337250 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:06:47.379575 containerd[1715]: time="2026-01-17T00:06:47.379481804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n869n,Uid:acb3da73-aad2-4399-b6f1-7f3c1a0d99c5,Namespace:kube-system,Attempt:1,} returns sandbox id \"4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86\"" Jan 17 00:06:47.388095 containerd[1715]: time="2026-01-17T00:06:47.388056447Z" level=info msg="CreateContainer within sandbox \"4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:06:47.413111 containerd[1715]: time="2026-01-17T00:06:47.413065817Z" level=info msg="CreateContainer within sandbox \"4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7300f6aff1bc8884fd5c8efe5dc6cc3d9faacce571b8efb3a14c0c6fcccf21e\"" Jan 17 00:06:47.414569 containerd[1715]: time="2026-01-17T00:06:47.413568737Z" level=info msg="StartContainer for \"f7300f6aff1bc8884fd5c8efe5dc6cc3d9faacce571b8efb3a14c0c6fcccf21e\"" Jan 17 00:06:47.439679 systemd[1]: Started cri-containerd-f7300f6aff1bc8884fd5c8efe5dc6cc3d9faacce571b8efb3a14c0c6fcccf21e.scope - libcontainer container f7300f6aff1bc8884fd5c8efe5dc6cc3d9faacce571b8efb3a14c0c6fcccf21e. Jan 17 00:06:47.466752 containerd[1715]: time="2026-01-17T00:06:47.466674279Z" level=info msg="StartContainer for \"f7300f6aff1bc8884fd5c8efe5dc6cc3d9faacce571b8efb3a14c0c6fcccf21e\" returns successfully" Jan 17 00:06:48.043941 containerd[1715]: time="2026-01-17T00:06:48.043831583Z" level=info msg="StopPodSandbox for \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\"" Jan 17 00:06:48.044367 containerd[1715]: time="2026-01-17T00:06:48.044154303Z" level=info msg="StopPodSandbox for \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\"" Jan 17 00:06:48.046652 containerd[1715]: time="2026-01-17T00:06:48.045812063Z" level=info msg="StopPodSandbox for \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\"" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.137 [INFO][5137] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.138 [INFO][5137] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" iface="eth0" netns="/var/run/netns/cni-093b2caf-f82d-fb03-0814-321a0d07d7ac" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.138 [INFO][5137] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" iface="eth0" netns="/var/run/netns/cni-093b2caf-f82d-fb03-0814-321a0d07d7ac" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.139 [INFO][5137] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" iface="eth0" netns="/var/run/netns/cni-093b2caf-f82d-fb03-0814-321a0d07d7ac" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.139 [INFO][5137] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.139 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.163 [INFO][5162] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.163 [INFO][5162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.163 [INFO][5162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.177 [WARNING][5162] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.177 [INFO][5162] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.178 [INFO][5162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:48.184990 containerd[1715]: 2026-01-17 00:06:48.182 [INFO][5137] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:06:48.187737 containerd[1715]: time="2026-01-17T00:06:48.185118914Z" level=info msg="TearDown network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\" successfully" Jan 17 00:06:48.187737 containerd[1715]: time="2026-01-17T00:06:48.185145074Z" level=info msg="StopPodSandbox for \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\" returns successfully" Jan 17 00:06:48.188145 containerd[1715]: time="2026-01-17T00:06:48.187959515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf786874-qhq5d,Uid:9340ab9f-05b7-44f8-b60d-bcae76bd89d3,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:06:48.189050 systemd[1]: run-netns-cni\x2d093b2caf\x2df82d\x2dfb03\x2d0814\x2d321a0d07d7ac.mount: Deactivated successfully. Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.110 [INFO][5136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.111 [INFO][5136] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" iface="eth0" netns="/var/run/netns/cni-ab9a79b9-603b-99a8-f66c-4d8077a68cd7" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.111 [INFO][5136] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" iface="eth0" netns="/var/run/netns/cni-ab9a79b9-603b-99a8-f66c-4d8077a68cd7" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.112 [INFO][5136] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" iface="eth0" netns="/var/run/netns/cni-ab9a79b9-603b-99a8-f66c-4d8077a68cd7" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.112 [INFO][5136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.112 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.165 [INFO][5156] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.165 [INFO][5156] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.179 [INFO][5156] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.193 [WARNING][5156] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.193 [INFO][5156] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.195 [INFO][5156] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:48.201661 containerd[1715]: 2026-01-17 00:06:48.198 [INFO][5136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:06:48.204230 containerd[1715]: time="2026-01-17T00:06:48.201832800Z" level=info msg="TearDown network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\" successfully" Jan 17 00:06:48.204230 containerd[1715]: time="2026-01-17T00:06:48.201866280Z" level=info msg="StopPodSandbox for \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\" returns successfully" Jan 17 00:06:48.204438 systemd[1]: run-netns-cni\x2dab9a79b9\x2d603b\x2d99a8\x2df66c\x2d4d8077a68cd7.mount: Deactivated successfully. Jan 17 00:06:48.206792 containerd[1715]: time="2026-01-17T00:06:48.206721922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pgmjl,Uid:616526de-5a58-4998-9f29-2aa2e02e1a8e,Namespace:kube-system,Attempt:1,}" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.133 [INFO][5143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.136 [INFO][5143] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" iface="eth0" netns="/var/run/netns/cni-cdba2acc-8083-25bf-61cb-b5b45cd437cd" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.140 [INFO][5143] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" iface="eth0" netns="/var/run/netns/cni-cdba2acc-8083-25bf-61cb-b5b45cd437cd" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.140 [INFO][5143] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" iface="eth0" netns="/var/run/netns/cni-cdba2acc-8083-25bf-61cb-b5b45cd437cd" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.140 [INFO][5143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.140 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.181 [INFO][5164] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.181 [INFO][5164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.195 [INFO][5164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.209 [WARNING][5164] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.209 [INFO][5164] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.211 [INFO][5164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:48.215185 containerd[1715]: 2026-01-17 00:06:48.213 [INFO][5143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:06:48.217920 containerd[1715]: time="2026-01-17T00:06:48.215319165Z" level=info msg="TearDown network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\" successfully" Jan 17 00:06:48.217920 containerd[1715]: time="2026-01-17T00:06:48.215344045Z" level=info msg="StopPodSandbox for \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\" returns successfully" Jan 17 00:06:48.217920 containerd[1715]: time="2026-01-17T00:06:48.216118045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ddb45bfc-grgqw,Uid:e747a046-268c-4a51-81e2-3f445b48b5cd,Namespace:calico-system,Attempt:1,}" Jan 17 00:06:48.219648 systemd[1]: run-netns-cni\x2dcdba2acc\x2d8083\x2d25bf\x2d61cb\x2db5b45cd437cd.mount: Deactivated successfully. 
Jan 17 00:06:48.366875 kubelet[3272]: I0117 00:06:48.365793 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n869n" podStartSLOduration=61.36577446 podStartE2EDuration="1m1.36577446s" podCreationTimestamp="2026-01-17 00:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:06:48.365152339 +0000 UTC m=+65.438098181" watchObservedRunningTime="2026-01-17 00:06:48.36577446 +0000 UTC m=+65.438720302" Jan 17 00:06:48.407572 systemd-networkd[1356]: calie0fb3cbee3a: Gained IPv6LL Jan 17 00:06:48.457345 systemd-networkd[1356]: cali43e192590d3: Link UP Jan 17 00:06:48.459340 systemd-networkd[1356]: cali43e192590d3: Gained carrier Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.303 [INFO][5177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0 coredns-674b8bbfcf- kube-system 616526de-5a58-4998-9f29-2aa2e02e1a8e 1041 0 2026-01-17 00:05:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-f5e0a482e1 coredns-674b8bbfcf-pgmjl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43e192590d3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pgmjl" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.303 [INFO][5177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pgmjl" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.349 [INFO][5208] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" HandleID="k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.350 [INFO][5208] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" HandleID="k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3b10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-f5e0a482e1", "pod":"coredns-674b8bbfcf-pgmjl", "timestamp":"2026-01-17 00:06:48.349868774 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f5e0a482e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.350 [INFO][5208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.350 [INFO][5208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.350 [INFO][5208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f5e0a482e1' Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.367 [INFO][5208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.383 [INFO][5208] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.408 [INFO][5208] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.418 [INFO][5208] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.422 [INFO][5208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.422 [INFO][5208] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.426 [INFO][5208] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.438 [INFO][5208] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.447 [INFO][5208] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.132/26] block=192.168.38.128/26 handle="k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.447 [INFO][5208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.132/26] handle="k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.447 [INFO][5208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:06:48.493577 containerd[1715]: 2026-01-17 00:06:48.447 [INFO][5208] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.132/26] IPv6=[] ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" HandleID="k8s-pod-network.2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.494167 containerd[1715]: 2026-01-17 00:06:48.451 [INFO][5177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pgmjl" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"616526de-5a58-4998-9f29-2aa2e02e1a8e", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"", Pod:"coredns-674b8bbfcf-pgmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43e192590d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:48.494167 containerd[1715]: 2026-01-17 00:06:48.451 [INFO][5177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.132/32] ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pgmjl" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.494167 containerd[1715]: 2026-01-17 00:06:48.451 [INFO][5177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43e192590d3 ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pgmjl" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.494167 containerd[1715]: 2026-01-17 00:06:48.461 [INFO][5177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-pgmjl" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.494167 containerd[1715]: 2026-01-17 00:06:48.464 [INFO][5177] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pgmjl" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"616526de-5a58-4998-9f29-2aa2e02e1a8e", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f", Pod:"coredns-674b8bbfcf-pgmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43e192590d3", MAC:"e6:d2:a5:2f:42:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:48.494167 containerd[1715]: 2026-01-17 00:06:48.489 [INFO][5177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pgmjl" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:06:48.517840 containerd[1715]: time="2026-01-17T00:06:48.517758995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:48.517964 containerd[1715]: time="2026-01-17T00:06:48.517865875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:48.517964 containerd[1715]: time="2026-01-17T00:06:48.517883915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:48.518555 containerd[1715]: time="2026-01-17T00:06:48.518056275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:48.532268 systemd-networkd[1356]: cali3f70a62a74b: Link UP Jan 17 00:06:48.533978 systemd-networkd[1356]: cali3f70a62a74b: Gained carrier Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.353 [INFO][5196] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0 calico-kube-controllers-68ddb45bfc- calico-system e747a046-268c-4a51-81e2-3f445b48b5cd 1042 0 2026-01-17 00:06:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68ddb45bfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-f5e0a482e1 calico-kube-controllers-68ddb45bfc-grgqw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3f70a62a74b [] [] }} ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Namespace="calico-system" Pod="calico-kube-controllers-68ddb45bfc-grgqw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.353 [INFO][5196] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Namespace="calico-system" Pod="calico-kube-controllers-68ddb45bfc-grgqw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.419 [INFO][5219] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" HandleID="k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.420 [INFO][5219] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" HandleID="k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f5e0a482e1", "pod":"calico-kube-controllers-68ddb45bfc-grgqw", "timestamp":"2026-01-17 00:06:48.419790439 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f5e0a482e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.420 [INFO][5219] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.447 [INFO][5219] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.448 [INFO][5219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f5e0a482e1' Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.469 [INFO][5219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.488 [INFO][5219] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.496 [INFO][5219] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.498 [INFO][5219] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.501 [INFO][5219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.501 [INFO][5219] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.502 [INFO][5219] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.509 [INFO][5219] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.523 [INFO][5219] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.133/26] block=192.168.38.128/26 handle="k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.523 [INFO][5219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.133/26] handle="k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.523 [INFO][5219] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:06:48.551407 containerd[1715]: 2026-01-17 00:06:48.523 [INFO][5219] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.133/26] IPv6=[] ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" HandleID="k8s-pod-network.98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.552287 containerd[1715]: 2026-01-17 00:06:48.528 [INFO][5196] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Namespace="calico-system" Pod="calico-kube-controllers-68ddb45bfc-grgqw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0", GenerateName:"calico-kube-controllers-68ddb45bfc-", Namespace:"calico-system", SelfLink:"", UID:"e747a046-268c-4a51-81e2-3f445b48b5cd", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ddb45bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"", Pod:"calico-kube-controllers-68ddb45bfc-grgqw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f70a62a74b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:48.552287 containerd[1715]: 2026-01-17 00:06:48.528 [INFO][5196] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.133/32] ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Namespace="calico-system" Pod="calico-kube-controllers-68ddb45bfc-grgqw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.552287 containerd[1715]: 2026-01-17 00:06:48.528 [INFO][5196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f70a62a74b ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Namespace="calico-system" Pod="calico-kube-controllers-68ddb45bfc-grgqw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.552287 containerd[1715]: 2026-01-17 00:06:48.533 [INFO][5196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Namespace="calico-system" Pod="calico-kube-controllers-68ddb45bfc-grgqw" 
WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.552287 containerd[1715]: 2026-01-17 00:06:48.535 [INFO][5196] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Namespace="calico-system" Pod="calico-kube-controllers-68ddb45bfc-grgqw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0", GenerateName:"calico-kube-controllers-68ddb45bfc-", Namespace:"calico-system", SelfLink:"", UID:"e747a046-268c-4a51-81e2-3f445b48b5cd", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ddb45bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c", Pod:"calico-kube-controllers-68ddb45bfc-grgqw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f70a62a74b", MAC:"c6:41:df:cb:11:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:48.552287 containerd[1715]: 2026-01-17 00:06:48.547 [INFO][5196] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c" Namespace="calico-system" Pod="calico-kube-controllers-68ddb45bfc-grgqw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:06:48.562713 systemd[1]: Started cri-containerd-2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f.scope - libcontainer container 2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f. Jan 17 00:06:48.588447 containerd[1715]: time="2026-01-17T00:06:48.588305061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:48.589318 containerd[1715]: time="2026-01-17T00:06:48.588848661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:48.589318 containerd[1715]: time="2026-01-17T00:06:48.589181661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:48.589318 containerd[1715]: time="2026-01-17T00:06:48.589279941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:48.611689 systemd[1]: Started cri-containerd-98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c.scope - libcontainer container 98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c. Jan 17 00:06:48.619618 containerd[1715]: time="2026-01-17T00:06:48.619431552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pgmjl,Uid:616526de-5a58-4998-9f29-2aa2e02e1a8e,Namespace:kube-system,Attempt:1,} returns sandbox id \"2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f\"" Jan 17 00:06:48.633928 containerd[1715]: time="2026-01-17T00:06:48.633783037Z" level=info msg="CreateContainer within sandbox \"2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:06:48.659677 systemd-networkd[1356]: cali57c0a4a80e8: Link UP Jan 17 00:06:48.669625 systemd-networkd[1356]: cali57c0a4a80e8: Gained carrier Jan 17 00:06:48.693942 containerd[1715]: time="2026-01-17T00:06:48.693301299Z" level=info msg="CreateContainer within sandbox \"2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd8790a0f2dbbb9f1a7e7a0bc9f3664568b05b5a246884512ed067b929ea5901\"" Jan 17 00:06:48.698244 containerd[1715]: time="2026-01-17T00:06:48.698036421Z" level=info msg="StartContainer for \"bd8790a0f2dbbb9f1a7e7a0bc9f3664568b05b5a246884512ed067b929ea5901\"" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.365 [INFO][5192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0 calico-apiserver-77bf786874- calico-apiserver 9340ab9f-05b7-44f8-b60d-bcae76bd89d3 1043 0 2026-01-17 00:05:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77bf786874 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-f5e0a482e1 calico-apiserver-77bf786874-qhq5d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57c0a4a80e8 [] [] }} ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-qhq5d" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.365 [INFO][5192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-qhq5d" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.441 [INFO][5225] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" HandleID="k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.441 [INFO][5225] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" HandleID="k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-f5e0a482e1", "pod":"calico-apiserver-77bf786874-qhq5d", "timestamp":"2026-01-17 00:06:48.441386807 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f5e0a482e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.441 [INFO][5225] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.525 [INFO][5225] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.525 [INFO][5225] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f5e0a482e1' Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.569 [INFO][5225] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.584 [INFO][5225] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.606 [INFO][5225] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.617 [INFO][5225] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.625 [INFO][5225] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.628 [INFO][5225] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.632 [INFO][5225] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0 Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.637 [INFO][5225] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.647 [INFO][5225] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.134/26] block=192.168.38.128/26 handle="k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.647 [INFO][5225] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.134/26] handle="k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 
00:06:48.647 [INFO][5225] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:48.705316 containerd[1715]: 2026-01-17 00:06:48.647 [INFO][5225] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.134/26] IPv6=[] ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" HandleID="k8s-pod-network.aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.706316 containerd[1715]: 2026-01-17 00:06:48.654 [INFO][5192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-qhq5d" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0", GenerateName:"calico-apiserver-77bf786874-", Namespace:"calico-apiserver", SelfLink:"", UID:"9340ab9f-05b7-44f8-b60d-bcae76bd89d3", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf786874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"", Pod:"calico-apiserver-77bf786874-qhq5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57c0a4a80e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:48.706316 containerd[1715]: 2026-01-17 00:06:48.654 [INFO][5192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.134/32] ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-qhq5d" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.706316 containerd[1715]: 2026-01-17 00:06:48.654 [INFO][5192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57c0a4a80e8 ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-qhq5d" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.706316 containerd[1715]: 2026-01-17 00:06:48.673 [INFO][5192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-qhq5d" 
WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.706316 containerd[1715]: 2026-01-17 00:06:48.677 [INFO][5192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-qhq5d" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0", GenerateName:"calico-apiserver-77bf786874-", Namespace:"calico-apiserver", SelfLink:"", UID:"9340ab9f-05b7-44f8-b60d-bcae76bd89d3", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf786874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0", Pod:"calico-apiserver-77bf786874-qhq5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57c0a4a80e8", MAC:"b2:60:2e:b7:65:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:48.706316 containerd[1715]: 2026-01-17 00:06:48.696 [INFO][5192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-qhq5d" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:06:48.751945 containerd[1715]: time="2026-01-17T00:06:48.750957360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:48.751945 containerd[1715]: time="2026-01-17T00:06:48.751008040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:48.751945 containerd[1715]: time="2026-01-17T00:06:48.751037280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:48.752805 containerd[1715]: time="2026-01-17T00:06:48.752675760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:48.760141 systemd[1]: Started cri-containerd-bd8790a0f2dbbb9f1a7e7a0bc9f3664568b05b5a246884512ed067b929ea5901.scope - libcontainer container bd8790a0f2dbbb9f1a7e7a0bc9f3664568b05b5a246884512ed067b929ea5901. 
Jan 17 00:06:48.760631 containerd[1715]: time="2026-01-17T00:06:48.759237563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68ddb45bfc-grgqw,Uid:e747a046-268c-4a51-81e2-3f445b48b5cd,Namespace:calico-system,Attempt:1,} returns sandbox id \"98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c\"" Jan 17 00:06:48.765537 containerd[1715]: time="2026-01-17T00:06:48.765414205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:06:48.786683 systemd[1]: Started cri-containerd-aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0.scope - libcontainer container aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0. Jan 17 00:06:48.811113 containerd[1715]: time="2026-01-17T00:06:48.811068982Z" level=info msg="StartContainer for \"bd8790a0f2dbbb9f1a7e7a0bc9f3664568b05b5a246884512ed067b929ea5901\" returns successfully" Jan 17 00:06:48.832305 containerd[1715]: time="2026-01-17T00:06:48.832264109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf786874-qhq5d,Uid:9340ab9f-05b7-44f8-b60d-bcae76bd89d3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0\"" Jan 17 00:06:49.005797 containerd[1715]: time="2026-01-17T00:06:49.005747132Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:49.008170 containerd[1715]: time="2026-01-17T00:06:49.008087773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:06:49.008170 containerd[1715]: time="2026-01-17T00:06:49.008142133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:06:49.008311 kubelet[3272]: E0117 00:06:49.008279 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:06:49.008397 kubelet[3272]: E0117 00:06:49.008324 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:06:49.008631 kubelet[3272]: E0117 00:06:49.008585 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r22zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68ddb45bfc-grgqw_calico-system(e747a046-268c-4a51-81e2-3f445b48b5cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:49.009238 containerd[1715]: time="2026-01-17T00:06:49.009034814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:06:49.010267 kubelet[3272]: E0117 00:06:49.010200 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" 
podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:06:49.045420 containerd[1715]: time="2026-01-17T00:06:49.045115267Z" level=info msg="StopPodSandbox for \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\"" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.095 [INFO][5442] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.095 [INFO][5442] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" iface="eth0" netns="/var/run/netns/cni-5cb741a2-3845-96dc-62f6-b9366cf546fe" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.095 [INFO][5442] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" iface="eth0" netns="/var/run/netns/cni-5cb741a2-3845-96dc-62f6-b9366cf546fe" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.095 [INFO][5442] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" iface="eth0" netns="/var/run/netns/cni-5cb741a2-3845-96dc-62f6-b9366cf546fe" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.095 [INFO][5442] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.096 [INFO][5442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.112 [INFO][5450] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.112 [INFO][5450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.112 [INFO][5450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.121 [WARNING][5450] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.121 [INFO][5450] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.123 [INFO][5450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:49.126582 containerd[1715]: 2026-01-17 00:06:49.124 [INFO][5442] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:06:49.126582 containerd[1715]: time="2026-01-17T00:06:49.126492896Z" level=info msg="TearDown network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\" successfully" Jan 17 00:06:49.126582 containerd[1715]: time="2026-01-17T00:06:49.126518816Z" level=info msg="StopPodSandbox for \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\" returns successfully" Jan 17 00:06:49.128356 containerd[1715]: time="2026-01-17T00:06:49.127988337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf786874-gphpw,Uid:6e9ff54d-9f3a-4f62-92e0-56921b0f16ea,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:06:49.142999 systemd[1]: run-netns-cni\x2d5cb741a2\x2d3845\x2d96dc\x2d62f6\x2db9366cf546fe.mount: Deactivated successfully. Jan 17 00:06:49.248732 systemd-networkd[1356]: calid7496ae7306: Link UP Jan 17 00:06:49.248937 systemd-networkd[1356]: calid7496ae7306: Gained carrier Jan 17 00:06:49.263703 containerd[1715]: time="2026-01-17T00:06:49.263408306Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.187 [INFO][5457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0 calico-apiserver-77bf786874- calico-apiserver 6e9ff54d-9f3a-4f62-92e0-56921b0f16ea 1072 0 2026-01-17 00:05:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77bf786874 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-f5e0a482e1 calico-apiserver-77bf786874-gphpw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid7496ae7306 [] [] }} ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-gphpw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.187 [INFO][5457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-gphpw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.208 [INFO][5468] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" HandleID="k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.208 [INFO][5468] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" HandleID="k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4081.3.6-n-f5e0a482e1", "pod":"calico-apiserver-77bf786874-gphpw", "timestamp":"2026-01-17 00:06:49.208137446 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f5e0a482e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.208 [INFO][5468] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.208 [INFO][5468] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.208 [INFO][5468] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f5e0a482e1' Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.216 [INFO][5468] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.220 [INFO][5468] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.223 [INFO][5468] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.225 [INFO][5468] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.227 [INFO][5468] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.227 [INFO][5468] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.229 [INFO][5468] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.235 [INFO][5468] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.243 [INFO][5468] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.135/26] block=192.168.38.128/26 handle="k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.243 [INFO][5468] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.135/26] handle="k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.243 [INFO][5468] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:06:49.264938 containerd[1715]: 2026-01-17 00:06:49.243 [INFO][5468] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.135/26] IPv6=[] ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" HandleID="k8s-pod-network.a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.266194 containerd[1715]: 2026-01-17 00:06:49.245 [INFO][5457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-gphpw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0", GenerateName:"calico-apiserver-77bf786874-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e9ff54d-9f3a-4f62-92e0-56921b0f16ea", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf786874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"", Pod:"calico-apiserver-77bf786874-gphpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7496ae7306", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:49.266194 containerd[1715]: 2026-01-17 00:06:49.245 [INFO][5457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.135/32] ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-gphpw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.266194 containerd[1715]: 2026-01-17 00:06:49.246 [INFO][5457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7496ae7306 ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-gphpw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.266194 containerd[1715]: 2026-01-17 00:06:49.249 [INFO][5457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-gphpw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.266194 containerd[1715]: 2026-01-17 00:06:49.249 
[INFO][5457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-gphpw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0", GenerateName:"calico-apiserver-77bf786874-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e9ff54d-9f3a-4f62-92e0-56921b0f16ea", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf786874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c", Pod:"calico-apiserver-77bf786874-gphpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7496ae7306", MAC:"ae:2d:34:55:0d:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:49.266194 containerd[1715]: 2026-01-17 00:06:49.259 [INFO][5457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c" Namespace="calico-apiserver" Pod="calico-apiserver-77bf786874-gphpw" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:06:49.266194 containerd[1715]: time="2026-01-17T00:06:49.265956707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:06:49.266194 containerd[1715]: time="2026-01-17T00:06:49.266014187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:06:49.267737 kubelet[3272]: E0117 00:06:49.266756 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:06:49.267737 kubelet[3272]: E0117 00:06:49.266798 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:06:49.267737 kubelet[3272]: E0117 00:06:49.266920 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmd54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf786874-qhq5d_calico-apiserver(9340ab9f-05b7-44f8-b60d-bcae76bd89d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:49.268346 kubelet[3272]: E0117 00:06:49.268292 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3" Jan 17 00:06:49.285868 containerd[1715]: time="2026-01-17T00:06:49.285776794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:49.285988 containerd[1715]: time="2026-01-17T00:06:49.285912914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:49.285988 containerd[1715]: time="2026-01-17T00:06:49.285948154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:49.286148 containerd[1715]: time="2026-01-17T00:06:49.286044234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:49.314701 systemd[1]: Started cri-containerd-a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c.scope - libcontainer container a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c. Jan 17 00:06:49.345107 containerd[1715]: time="2026-01-17T00:06:49.345065576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bf786874-gphpw,Uid:6e9ff54d-9f3a-4f62-92e0-56921b0f16ea,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c\"" Jan 17 00:06:49.347372 kubelet[3272]: E0117 00:06:49.347305 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:06:49.348033 containerd[1715]: time="2026-01-17T00:06:49.348000697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:06:49.352058 kubelet[3272]: E0117 00:06:49.351925 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3" Jan 17 00:06:49.404575 kubelet[3272]: I0117 00:06:49.403685 3272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pgmjl" podStartSLOduration=62.403666557 podStartE2EDuration="1m2.403666557s" podCreationTimestamp="2026-01-17 00:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:06:49.38360079 +0000 UTC m=+66.456546672" watchObservedRunningTime="2026-01-17 00:06:49.403666557 +0000 UTC m=+66.476612439" Jan 17 00:06:49.600119 containerd[1715]: time="2026-01-17T00:06:49.600000709Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:49.603240 containerd[1715]: time="2026-01-17T00:06:49.603193310Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:06:49.603304 containerd[1715]: time="2026-01-17T00:06:49.603285910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:06:49.603688 kubelet[3272]: E0117 00:06:49.603437 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:06:49.603688 kubelet[3272]: E0117 00:06:49.603484 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:06:49.603688 kubelet[3272]: E0117 00:06:49.603633 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5ndq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf786874-gphpw_calico-apiserver(6e9ff54d-9f3a-4f62-92e0-56921b0f16ea): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:49.605087 kubelet[3272]: E0117 00:06:49.604902 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:06:49.685672 systemd-networkd[1356]: cali43e192590d3: Gained IPv6LL Jan 17 00:06:49.749714 systemd-networkd[1356]: cali57c0a4a80e8: Gained IPv6LL Jan 17 00:06:50.043437 containerd[1715]: time="2026-01-17T00:06:50.043184710Z" level=info msg="StopPodSandbox for \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\"" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.089 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.089 [INFO][5536] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" iface="eth0" netns="/var/run/netns/cni-c0e5f841-7383-f246-9280-c24ac640ae9a" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.090 [INFO][5536] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" iface="eth0" netns="/var/run/netns/cni-c0e5f841-7383-f246-9280-c24ac640ae9a" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.090 [INFO][5536] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" iface="eth0" netns="/var/run/netns/cni-c0e5f841-7383-f246-9280-c24ac640ae9a" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.091 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.091 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.108 [INFO][5544] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.108 [INFO][5544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.108 [INFO][5544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.117 [WARNING][5544] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.117 [INFO][5544] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.118 [INFO][5544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:50.122036 containerd[1715]: 2026-01-17 00:06:50.120 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:06:50.123579 containerd[1715]: time="2026-01-17T00:06:50.122678099Z" level=info msg="TearDown network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\" successfully" Jan 17 00:06:50.123579 containerd[1715]: time="2026-01-17T00:06:50.122708259Z" level=info msg="StopPodSandbox for \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\" returns successfully" Jan 17 00:06:50.123579 containerd[1715]: time="2026-01-17T00:06:50.123356299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7gm8,Uid:8214a0c3-a0f7-40b6-915d-08cea6de347e,Namespace:calico-system,Attempt:1,}" Jan 17 00:06:50.133898 systemd[1]: run-netns-cni\x2dc0e5f841\x2d7383\x2df246\x2d9280\x2dc24ac640ae9a.mount: Deactivated successfully. Jan 17 00:06:50.248068 systemd-networkd[1356]: cali36da9a9fc8c: Link UP Jan 17 00:06:50.249153 systemd-networkd[1356]: cali36da9a9fc8c: Gained carrier Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.183 [INFO][5551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0 csi-node-driver- calico-system 8214a0c3-a0f7-40b6-915d-08cea6de347e 1097 0 2026-01-17 00:06:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-f5e0a482e1 csi-node-driver-z7gm8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali36da9a9fc8c [] [] }} ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Namespace="calico-system" Pod="csi-node-driver-z7gm8" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.183 [INFO][5551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Namespace="calico-system" Pod="csi-node-driver-z7gm8" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.208 [INFO][5563] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" 
HandleID="k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.208 [INFO][5563] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" HandleID="k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f5e0a482e1", "pod":"csi-node-driver-z7gm8", "timestamp":"2026-01-17 00:06:50.20811865 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f5e0a482e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.208 [INFO][5563] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.208 [INFO][5563] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.208 [INFO][5563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f5e0a482e1' Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.217 [INFO][5563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.220 [INFO][5563] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.224 [INFO][5563] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.225 [INFO][5563] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.227 [INFO][5563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.227 [INFO][5563] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.229 [INFO][5563] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.235 [INFO][5563] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.242 [INFO][5563] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.136/26] block=192.168.38.128/26 handle="k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.242 
[INFO][5563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.136/26] handle="k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" host="ci-4081.3.6-n-f5e0a482e1" Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.242 [INFO][5563] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:06:50.264329 containerd[1715]: 2026-01-17 00:06:50.242 [INFO][5563] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.136/26] IPv6=[] ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" HandleID="k8s-pod-network.eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.265890 containerd[1715]: 2026-01-17 00:06:50.246 [INFO][5551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Namespace="calico-system" Pod="csi-node-driver-z7gm8" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8214a0c3-a0f7-40b6-915d-08cea6de347e", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"", Pod:"csi-node-driver-z7gm8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36da9a9fc8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:50.265890 containerd[1715]: 2026-01-17 00:06:50.246 [INFO][5551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.136/32] ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Namespace="calico-system" Pod="csi-node-driver-z7gm8" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.265890 containerd[1715]: 2026-01-17 00:06:50.246 [INFO][5551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36da9a9fc8c ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Namespace="calico-system" Pod="csi-node-driver-z7gm8" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.265890 containerd[1715]: 2026-01-17 00:06:50.248 [INFO][5551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" 
Namespace="calico-system" Pod="csi-node-driver-z7gm8" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.265890 containerd[1715]: 2026-01-17 00:06:50.248 [INFO][5551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Namespace="calico-system" Pod="csi-node-driver-z7gm8" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8214a0c3-a0f7-40b6-915d-08cea6de347e", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a", Pod:"csi-node-driver-z7gm8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36da9a9fc8c", MAC:"2e:ca:d8:91:b1:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:06:50.265890 containerd[1715]: 2026-01-17 00:06:50.261 [INFO][5551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a" Namespace="calico-system" Pod="csi-node-driver-z7gm8" WorkloadEndpoint="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:06:50.284562 containerd[1715]: time="2026-01-17T00:06:50.283927317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:50.284562 containerd[1715]: time="2026-01-17T00:06:50.284001317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:50.284562 containerd[1715]: time="2026-01-17T00:06:50.284023157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:50.284562 containerd[1715]: time="2026-01-17T00:06:50.284108197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:50.309406 systemd[1]: Started cri-containerd-eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a.scope - libcontainer container eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a. 
Jan 17 00:06:50.337879 containerd[1715]: time="2026-01-17T00:06:50.337831377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7gm8,Uid:8214a0c3-a0f7-40b6-915d-08cea6de347e,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a\"" Jan 17 00:06:50.340059 containerd[1715]: time="2026-01-17T00:06:50.339853138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:06:50.365058 kubelet[3272]: E0117 00:06:50.364834 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:06:50.368993 kubelet[3272]: E0117 00:06:50.368825 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3" Jan 17 00:06:50.368993 kubelet[3272]: E0117 00:06:50.368922 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:06:50.581716 systemd-networkd[1356]: cali3f70a62a74b: Gained IPv6LL Jan 17 00:06:50.606554 containerd[1715]: time="2026-01-17T00:06:50.606464835Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:50.608699 containerd[1715]: time="2026-01-17T00:06:50.608663435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:06:50.608865 containerd[1715]: time="2026-01-17T00:06:50.608686875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:06:50.609140 kubelet[3272]: E0117 00:06:50.608990 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:06:50.609140 kubelet[3272]: E0117 00:06:50.609040 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:06:50.609601 kubelet[3272]: E0117 00:06:50.609510 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:50.613407 containerd[1715]: time="2026-01-17T00:06:50.612066157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:06:50.837683 systemd-networkd[1356]: calid7496ae7306: Gained IPv6LL Jan 17 00:06:50.857629 containerd[1715]: time="2026-01-17T00:06:50.857327046Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:50.860275 containerd[1715]: time="2026-01-17T00:06:50.860170127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:06:50.860385 containerd[1715]: time="2026-01-17T00:06:50.860209687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:06:50.860490 kubelet[3272]: E0117 00:06:50.860429 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:06:50.860549 kubelet[3272]: E0117 00:06:50.860501 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:06:50.860803 kubelet[3272]: E0117 00:06:50.860760 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Jan 17 00:06:50.861920 kubelet[3272]: E0117 00:06:50.861875 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:51.134502 systemd[1]: run-containerd-runc-k8s.io-eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a-runc.zuM7FD.mount: Deactivated successfully. Jan 17 00:06:51.372224 kubelet[3272]: E0117 00:06:51.372156 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:06:51.373256 kubelet[3272]: E0117 00:06:51.373159 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:06:52.054692 systemd-networkd[1356]: cali36da9a9fc8c: Gained IPv6LL Jan 17 00:06:58.045041 containerd[1715]: time="2026-01-17T00:06:58.044912277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:06:58.299858 containerd[1715]: time="2026-01-17T00:06:58.299726651Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:58.304780 containerd[1715]: time="2026-01-17T00:06:58.304734933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:06:58.304856 containerd[1715]: time="2026-01-17T00:06:58.304837653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:06:58.305006 kubelet[3272]: E0117 00:06:58.304962 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:06:58.305285 kubelet[3272]: E0117 00:06:58.305017 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:06:58.305285 kubelet[3272]: E0117 00:06:58.305141 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzksz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vtx75_calico-system(e2e5377f-9c87-4d0a-b448-a7595a3af9ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:58.306422 kubelet[3272]: E0117 00:06:58.306384 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:06:59.045604 containerd[1715]: time="2026-01-17T00:06:59.044927208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:06:59.337548 containerd[1715]: time="2026-01-17T00:06:59.337419997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:59.339759 containerd[1715]: time="2026-01-17T00:06:59.339718557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:06:59.339840 containerd[1715]: time="2026-01-17T00:06:59.339821717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:06:59.340177 kubelet[3272]: E0117 00:06:59.339962 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:06:59.340177 kubelet[3272]: E0117 00:06:59.340015 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:06:59.340177 kubelet[3272]: E0117 00:06:59.340130 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf2168dfdbe84860b95b751791854241,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t77t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b946cd94f-7mkrh_calico-system(8057ab60-fa20-42e9-a7e5-844713387641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:59.342988 containerd[1715]: time="2026-01-17T00:06:59.342953959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:06:59.597126 containerd[1715]: time="2026-01-17T00:06:59.597007293Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:06:59.599601 containerd[1715]: time="2026-01-17T00:06:59.599556734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:06:59.599692 containerd[1715]: time="2026-01-17T00:06:59.599661654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:06:59.599836 kubelet[3272]: E0117 00:06:59.599800 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:06:59.599882 kubelet[3272]: E0117 00:06:59.599848 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:06:59.599990 kubelet[3272]: E0117 00:06:59.599951 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t77t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b946cd94f-7mkrh_calico-system(8057ab60-fa20-42e9-a7e5-844713387641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:06:59.601083 kubelet[3272]: E0117 00:06:59.601045 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641" Jan 17 00:07:04.043786 containerd[1715]: time="2026-01-17T00:07:04.043739502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:07:04.319700 containerd[1715]: time="2026-01-17T00:07:04.319566080Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:04.322463 containerd[1715]: time="2026-01-17T00:07:04.322349201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:07:04.322463 containerd[1715]: time="2026-01-17T00:07:04.322422521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:07:04.322598 kubelet[3272]: E0117 00:07:04.322565 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:04.322925 kubelet[3272]: E0117 00:07:04.322613 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:04.322925 kubelet[3272]: E0117 00:07:04.322729 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5ndq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf786874-gphpw_calico-apiserver(6e9ff54d-9f3a-4f62-92e0-56921b0f16ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:04.324240 kubelet[3272]: E0117 00:07:04.324203 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:07:05.045583 containerd[1715]: time="2026-01-17T00:07:05.045157137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:07:05.280795 containerd[1715]: time="2026-01-17T00:07:05.280597540Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:05.283140 containerd[1715]: time="2026-01-17T00:07:05.283058701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:07:05.283222 containerd[1715]: time="2026-01-17T00:07:05.283138261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:07:05.283298 kubelet[3272]: E0117 00:07:05.283259 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:07:05.283356 kubelet[3272]: E0117 00:07:05.283308 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:07:05.283461 kubelet[3272]: E0117 00:07:05.283420 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:05.285961 containerd[1715]: time="2026-01-17T00:07:05.285856462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:07:05.552487 containerd[1715]: time="2026-01-17T00:07:05.552378157Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:05.555745 containerd[1715]: time="2026-01-17T00:07:05.555567518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:07:05.555745 containerd[1715]: time="2026-01-17T00:07:05.555679958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:07:05.556647 kubelet[3272]: E0117 00:07:05.556599 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:07:05.557000 kubelet[3272]: E0117 00:07:05.556665 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:07:05.557000 kubelet[3272]: E0117 00:07:05.556789 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:05.558665 kubelet[3272]: E0117 00:07:05.558614 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:07:06.045337 containerd[1715]: time="2026-01-17T00:07:06.045289612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:07:06.303360 containerd[1715]: time="2026-01-17T00:07:06.302708343Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:06.305727 containerd[1715]: time="2026-01-17T00:07:06.305579064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:07:06.306163 containerd[1715]: time="2026-01-17T00:07:06.305677824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:07:06.306212 kubelet[3272]: E0117 00:07:06.306076 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:06.306212 kubelet[3272]: E0117 00:07:06.306131 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:06.306768 containerd[1715]: time="2026-01-17T00:07:06.306737985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:07:06.306973 kubelet[3272]: E0117 00:07:06.306882 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmd54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf786874-qhq5d_calico-apiserver(9340ab9f-05b7-44f8-b60d-bcae76bd89d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:06.310885 kubelet[3272]: E0117 00:07:06.308384 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3" Jan 17 00:07:06.566180 containerd[1715]: time="2026-01-17T00:07:06.566013437Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:06.568562 containerd[1715]: time="2026-01-17T00:07:06.568494077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:07:06.568746 containerd[1715]: time="2026-01-17T00:07:06.568605157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:07:06.568785 kubelet[3272]: E0117 00:07:06.568727 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:07:06.568785 kubelet[3272]: E0117 00:07:06.568775 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:07:06.569918 kubelet[3272]: E0117 00:07:06.568910 3272 kuberuntime_manager.go:1358] "Unhandled 
Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r22zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68ddb45bfc-grgqw_calico-system(e747a046-268c-4a51-81e2-3f445b48b5cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:06.570067 kubelet[3272]: E0117 00:07:06.570034 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:07:11.047419 kubelet[3272]: E0117 00:07:11.047353 3272 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641" Jan 17 00:07:12.044733 kubelet[3272]: E0117 00:07:12.044419 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:07:15.044447 kubelet[3272]: E0117 00:07:15.044394 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:07:17.049123 kubelet[3272]: E0117 00:07:17.049054 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:07:19.045628 kubelet[3272]: E0117 00:07:19.044801 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:07:22.046551 kubelet[3272]: E0117 00:07:22.044722 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3" Jan 17 00:07:23.047105 containerd[1715]: time="2026-01-17T00:07:23.047065200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:07:23.485418 containerd[1715]: time="2026-01-17T00:07:23.485377373Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:23.488674 containerd[1715]: time="2026-01-17T00:07:23.488574614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:07:23.488891 kubelet[3272]: E0117 00:07:23.488856 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:07:23.489222 kubelet[3272]: E0117 00:07:23.488902 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:07:23.489222 kubelet[3272]: E0117 00:07:23.489005 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf2168dfdbe84860b95b751791854241,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t77t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b946cd94f-7mkrh_calico-system(8057ab60-fa20-42e9-a7e5-844713387641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:23.490307 containerd[1715]: time="2026-01-17T00:07:23.488644254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:07:23.493664 containerd[1715]: time="2026-01-17T00:07:23.493628296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:07:23.778426 containerd[1715]: time="2026-01-17T00:07:23.777783928Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:23.780224 containerd[1715]: time="2026-01-17T00:07:23.780128689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:07:23.780224 containerd[1715]: time="2026-01-17T00:07:23.780158409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:07:23.780478 kubelet[3272]: E0117 00:07:23.780432 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:07:23.781045 kubelet[3272]: E0117 00:07:23.780486 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:07:23.781045 kubelet[3272]: E0117 00:07:23.780616 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t77t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b946cd94f-7mkrh_calico-system(8057ab60-fa20-42e9-a7e5-844713387641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:23.782326 kubelet[3272]: E0117 00:07:23.782281 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641" Jan 17 00:07:27.046607 containerd[1715]: time="2026-01-17T00:07:27.046396295Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:07:27.307099 containerd[1715]: time="2026-01-17T00:07:27.306830957Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:27.309424 containerd[1715]: time="2026-01-17T00:07:27.309322598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:07:27.309424 containerd[1715]: time="2026-01-17T00:07:27.309401518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:07:27.310716 kubelet[3272]: E0117 00:07:27.309672 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:07:27.310716 kubelet[3272]: E0117 00:07:27.309722 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:07:27.310716 kubelet[3272]: E0117 00:07:27.309915 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzksz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vtx75_calico-system(e2e5377f-9c87-4d0a-b448-a7595a3af9ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:27.311127 containerd[1715]: time="2026-01-17T00:07:27.310487279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:07:27.311319 kubelet[3272]: E0117 00:07:27.311216 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:07:27.610121 containerd[1715]: time="2026-01-17T00:07:27.609579996Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:27.612441 containerd[1715]: time="2026-01-17T00:07:27.612312718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:07:27.612441 containerd[1715]: time="2026-01-17T00:07:27.612412398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:07:27.613931 kubelet[3272]: E0117 00:07:27.613417 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:27.613931 kubelet[3272]: E0117 00:07:27.613461 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:27.613931 kubelet[3272]: E0117 00:07:27.613593 3272 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5ndq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf786874-gphpw_calico-apiserver(6e9ff54d-9f3a-4f62-92e0-56921b0f16ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:27.615013 kubelet[3272]: E0117 00:07:27.614987 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:07:30.044942 containerd[1715]: time="2026-01-17T00:07:30.044880818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:07:30.290431 containerd[1715]: time="2026-01-17T00:07:30.290385037Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:30.293630 containerd[1715]: time="2026-01-17T00:07:30.293589559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:07:30.293719 containerd[1715]: time="2026-01-17T00:07:30.293688199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:07:30.293856 kubelet[3272]: E0117 00:07:30.293819 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:07:30.295125 kubelet[3272]: E0117 00:07:30.293872 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:07:30.295125 kubelet[3272]: E0117 00:07:30.293989 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:30.298022 containerd[1715]: time="2026-01-17T00:07:30.297988280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:07:30.569436 containerd[1715]: 
time="2026-01-17T00:07:30.568804870Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:30.571550 containerd[1715]: time="2026-01-17T00:07:30.571423831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:07:30.571550 containerd[1715]: time="2026-01-17T00:07:30.571505111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:07:30.571985 kubelet[3272]: E0117 00:07:30.571664 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:07:30.571985 kubelet[3272]: E0117 00:07:30.571735 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:07:30.571985 kubelet[3272]: E0117 00:07:30.571866 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:30.573232 kubelet[3272]: E0117 00:07:30.573190 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:07:31.048129 containerd[1715]: time="2026-01-17T00:07:31.047905744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:07:31.312251 containerd[1715]: time="2026-01-17T00:07:31.311991411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:31.314982 containerd[1715]: time="2026-01-17T00:07:31.314480212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:07:31.314982 containerd[1715]: time="2026-01-17T00:07:31.314562092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:07:31.315487 kubelet[3272]: E0117 00:07:31.315232 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:07:31.315487 kubelet[3272]: E0117 00:07:31.315295 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:07:31.315487 kubelet[3272]: E0117 00:07:31.315428 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r22zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68ddb45bfc-grgqw_calico-system(e747a046-268c-4a51-81e2-3f445b48b5cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:31.317555 kubelet[3272]: E0117 00:07:31.316987 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:07:34.046858 containerd[1715]: time="2026-01-17T00:07:34.046817277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:07:34.293414 containerd[1715]: time="2026-01-17T00:07:34.293361536Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:34.295599 containerd[1715]: time="2026-01-17T00:07:34.295552057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:07:34.295681 containerd[1715]: time="2026-01-17T00:07:34.295562777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:07:34.295837 kubelet[3272]: E0117 00:07:34.295795 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:34.296111 kubelet[3272]: E0117 00:07:34.295849 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:34.296111 
kubelet[3272]: E0117 00:07:34.295969 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmd54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf786874-qhq5d_calico-apiserver(9340ab9f-05b7-44f8-b60d-bcae76bd89d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:34.297887 kubelet[3272]: E0117 00:07:34.297634 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3" Jan 17 00:07:36.044605 kubelet[3272]: E0117 00:07:36.044408 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641" Jan 17 00:07:39.046114 kubelet[3272]: E0117 00:07:39.045209 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad" Jan 17 00:07:39.046479 kubelet[3272]: E0117 00:07:39.046447 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea" Jan 17 00:07:43.402673 containerd[1715]: time="2026-01-17T00:07:43.402609504Z" level=info msg="StopPodSandbox for \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\"" Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.446 [WARNING][5711] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0", GenerateName:"calico-apiserver-77bf786874-", Namespace:"calico-apiserver", SelfLink:"", UID:"9340ab9f-05b7-44f8-b60d-bcae76bd89d3", ResourceVersion:"1291", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf786874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0", Pod:"calico-apiserver-77bf786874-qhq5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57c0a4a80e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.447 [INFO][5711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.447 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" iface="eth0" netns="" Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.447 [INFO][5711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.447 [INFO][5711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.476 [INFO][5718] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.477 [INFO][5718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.477 [INFO][5718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.492 [WARNING][5718] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.492 [INFO][5718] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.493 [INFO][5718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:43.500690 containerd[1715]: 2026-01-17 00:07:43.497 [INFO][5711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:07:43.501825 containerd[1715]: time="2026-01-17T00:07:43.500743063Z" level=info msg="TearDown network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\" successfully" Jan 17 00:07:43.501825 containerd[1715]: time="2026-01-17T00:07:43.500778023Z" level=info msg="StopPodSandbox for \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\" returns successfully" Jan 17 00:07:43.501825 containerd[1715]: time="2026-01-17T00:07:43.501232103Z" level=info msg="RemovePodSandbox for \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\"" Jan 17 00:07:43.501825 containerd[1715]: time="2026-01-17T00:07:43.501261623Z" level=info msg="Forcibly stopping sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\"" Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.573 [WARNING][5732] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0", GenerateName:"calico-apiserver-77bf786874-", Namespace:"calico-apiserver", SelfLink:"", UID:"9340ab9f-05b7-44f8-b60d-bcae76bd89d3", ResourceVersion:"1291", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf786874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"aee76c7ced610ee37a579850f783e833e2431c2ad97f63a309e2294ee7d1e7d0", Pod:"calico-apiserver-77bf786874-qhq5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57c0a4a80e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.573 [INFO][5732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.574 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" iface="eth0" netns="" Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.574 [INFO][5732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.574 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.602 [INFO][5740] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.602 [INFO][5740] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.602 [INFO][5740] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.629 [WARNING][5740] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.629 [INFO][5740] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" HandleID="k8s-pod-network.c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--qhq5d-eth0" Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.631 [INFO][5740] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:43.635618 containerd[1715]: 2026-01-17 00:07:43.633 [INFO][5732] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92" Jan 17 00:07:43.636456 containerd[1715]: time="2026-01-17T00:07:43.636069517Z" level=info msg="TearDown network for sandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\" successfully" Jan 17 00:07:43.679346 containerd[1715]: time="2026-01-17T00:07:43.678435814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:07:43.679684 containerd[1715]: time="2026-01-17T00:07:43.679475855Z" level=info msg="RemovePodSandbox \"c4f49648230d0ca40d5e5a105ff51e6108de8b691b8f11fb98f9e9e7d653af92\" returns successfully" Jan 17 00:07:43.680144 containerd[1715]: time="2026-01-17T00:07:43.680119535Z" level=info msg="StopPodSandbox for \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\"" Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.716 [WARNING][5754] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"616526de-5a58-4998-9f29-2aa2e02e1a8e", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f", Pod:"coredns-674b8bbfcf-pgmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43e192590d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.717 [INFO][5754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.717 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" iface="eth0" netns="" Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.717 [INFO][5754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.717 [INFO][5754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.736 [INFO][5761] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.736 [INFO][5761] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.737 [INFO][5761] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.745 [WARNING][5761] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.745 [INFO][5761] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.746 [INFO][5761] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:43.750560 containerd[1715]: 2026-01-17 00:07:43.748 [INFO][5754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:07:43.750560 containerd[1715]: time="2026-01-17T00:07:43.750433123Z" level=info msg="TearDown network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\" successfully" Jan 17 00:07:43.750560 containerd[1715]: time="2026-01-17T00:07:43.750457163Z" level=info msg="StopPodSandbox for \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\" returns successfully" Jan 17 00:07:43.751447 containerd[1715]: time="2026-01-17T00:07:43.751411883Z" level=info msg="RemovePodSandbox for \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\"" Jan 17 00:07:43.751512 containerd[1715]: time="2026-01-17T00:07:43.751449723Z" level=info msg="Forcibly stopping sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\"" Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.795 [WARNING][5776] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"616526de-5a58-4998-9f29-2aa2e02e1a8e", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"2471d0c32e8aaaa0b5fc39b5c14e72633d1ec5dfa5384dffa53a54ea76ef037f", Pod:"coredns-674b8bbfcf-pgmjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43e192590d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.795 [INFO][5776] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.795 [INFO][5776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" iface="eth0" netns="" Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.795 [INFO][5776] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.795 [INFO][5776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.816 [INFO][5783] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.816 [INFO][5783] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.816 [INFO][5783] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.828 [WARNING][5783] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.828 [INFO][5783] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" HandleID="k8s-pod-network.4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--pgmjl-eth0" Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.830 [INFO][5783] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:43.836745 containerd[1715]: 2026-01-17 00:07:43.833 [INFO][5776] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498" Jan 17 00:07:43.836745 containerd[1715]: time="2026-01-17T00:07:43.836366597Z" level=info msg="TearDown network for sandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\" successfully" Jan 17 00:07:43.979182 containerd[1715]: time="2026-01-17T00:07:43.978991934Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:07:43.979182 containerd[1715]: time="2026-01-17T00:07:43.979064294Z" level=info msg="RemovePodSandbox \"4009255931225c2c378900679e17fe1de8a7414469d3a2169f83a23135ea3498\" returns successfully" Jan 17 00:07:43.979821 containerd[1715]: time="2026-01-17T00:07:43.979568294Z" level=info msg="StopPodSandbox for \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\"" Jan 17 00:07:44.049560 kubelet[3272]: E0117 00:07:44.049497 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e" Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.027 [WARNING][5797] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8214a0c3-a0f7-40b6-915d-08cea6de347e", ResourceVersion:"1277", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a", Pod:"csi-node-driver-z7gm8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36da9a9fc8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.027 [INFO][5797] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.027 [INFO][5797] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" iface="eth0" netns="" Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.027 [INFO][5797] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.027 [INFO][5797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.062 [INFO][5804] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.062 [INFO][5804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.062 [INFO][5804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.083 [WARNING][5804] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.083 [INFO][5804] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.091 [INFO][5804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:44.096182 containerd[1715]: 2026-01-17 00:07:44.093 [INFO][5797] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:07:44.096182 containerd[1715]: time="2026-01-17T00:07:44.096067660Z" level=info msg="TearDown network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\" successfully" Jan 17 00:07:44.096182 containerd[1715]: time="2026-01-17T00:07:44.096092580Z" level=info msg="StopPodSandbox for \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\" returns successfully" Jan 17 00:07:44.097885 containerd[1715]: time="2026-01-17T00:07:44.097831741Z" level=info msg="RemovePodSandbox for \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\"" Jan 17 00:07:44.097885 containerd[1715]: time="2026-01-17T00:07:44.097864821Z" level=info msg="Forcibly stopping sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\"" Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.147 [WARNING][5818] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8214a0c3-a0f7-40b6-915d-08cea6de347e", ResourceVersion:"1325", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"eb68bef2d36f31550f1045d0b43dca542adb023b522217db3504d8ab0a5a2d8a", Pod:"csi-node-driver-z7gm8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36da9a9fc8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.147 [INFO][5818] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.147 [INFO][5818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" iface="eth0" netns="" Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.147 [INFO][5818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.147 [INFO][5818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.175 [INFO][5825] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.175 [INFO][5825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.175 [INFO][5825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.184 [WARNING][5825] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.184 [INFO][5825] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" HandleID="k8s-pod-network.61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-csi--node--driver--z7gm8-eth0" Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.185 [INFO][5825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:44.190698 containerd[1715]: 2026-01-17 00:07:44.188 [INFO][5818] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c" Jan 17 00:07:44.192436 containerd[1715]: time="2026-01-17T00:07:44.190786977Z" level=info msg="TearDown network for sandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\" successfully" Jan 17 00:07:44.582504 containerd[1715]: time="2026-01-17T00:07:44.582416811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:07:44.582992 containerd[1715]: time="2026-01-17T00:07:44.582894931Z" level=info msg="RemovePodSandbox \"61f4c60be8b49c2da715eb48ae06a812157e265fdae994251c94ecccb37e160c\" returns successfully" Jan 17 00:07:44.583545 containerd[1715]: time="2026-01-17T00:07:44.583277691Z" level=info msg="StopPodSandbox for \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\"" Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.627 [WARNING][5840] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0", GenerateName:"calico-kube-controllers-68ddb45bfc-", Namespace:"calico-system", SelfLink:"", UID:"e747a046-268c-4a51-81e2-3f445b48b5cd", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ddb45bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c", Pod:"calico-kube-controllers-68ddb45bfc-grgqw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f70a62a74b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.627 [INFO][5840] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.627 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" iface="eth0" netns="" Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.627 [INFO][5840] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.627 [INFO][5840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.647 [INFO][5847] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.647 [INFO][5847] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.647 [INFO][5847] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.655 [WARNING][5847] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.655 [INFO][5847] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.657 [INFO][5847] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:44.660519 containerd[1715]: 2026-01-17 00:07:44.659 [INFO][5840] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:07:44.661286 containerd[1715]: time="2026-01-17T00:07:44.660991202Z" level=info msg="TearDown network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\" successfully" Jan 17 00:07:44.661286 containerd[1715]: time="2026-01-17T00:07:44.661033042Z" level=info msg="StopPodSandbox for \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\" returns successfully" Jan 17 00:07:44.661821 containerd[1715]: time="2026-01-17T00:07:44.661474402Z" level=info msg="RemovePodSandbox for \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\"" Jan 17 00:07:44.661821 containerd[1715]: time="2026-01-17T00:07:44.661501642Z" level=info msg="Forcibly stopping sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\"" Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.700 [WARNING][5861] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0", GenerateName:"calico-kube-controllers-68ddb45bfc-", Namespace:"calico-system", SelfLink:"", UID:"e747a046-268c-4a51-81e2-3f445b48b5cd", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68ddb45bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"98d73ab266c6496d0a4496efef315d221dd32c130b0ddce71c87dfd0aaa93c0c", Pod:"calico-kube-controllers-68ddb45bfc-grgqw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f70a62a74b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.700 [INFO][5861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.700 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" iface="eth0" netns="" Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.700 [INFO][5861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.700 [INFO][5861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.717 [INFO][5868] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.717 [INFO][5868] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.718 [INFO][5868] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.726 [WARNING][5868] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.726 [INFO][5868] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" HandleID="k8s-pod-network.a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--kube--controllers--68ddb45bfc--grgqw-eth0" Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.727 [INFO][5868] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:44.732657 containerd[1715]: 2026-01-17 00:07:44.729 [INFO][5861] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7" Jan 17 00:07:44.734492 containerd[1715]: time="2026-01-17T00:07:44.733234790Z" level=info msg="TearDown network for sandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\" successfully" Jan 17 00:07:44.879489 containerd[1715]: time="2026-01-17T00:07:44.879377047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:07:44.879672 containerd[1715]: time="2026-01-17T00:07:44.879653127Z" level=info msg="RemovePodSandbox \"a36d958289ae5a0d5ce5f09b4bf15028cf0595da92e1e3bc0bb2324de49e6da7\" returns successfully" Jan 17 00:07:44.880188 containerd[1715]: time="2026-01-17T00:07:44.880161688Z" level=info msg="StopPodSandbox for \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\"" Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.916 [WARNING][5882] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e2e5377f-9c87-4d0a-b448-a7595a3af9ad", ResourceVersion:"1311", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d", Pod:"goldmane-666569f655-vtx75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia41bb928959", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.917 [INFO][5882] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.917 [INFO][5882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" iface="eth0" netns="" Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.917 [INFO][5882] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.917 [INFO][5882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.933 [INFO][5889] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.933 [INFO][5889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.933 [INFO][5889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.942 [WARNING][5889] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.942 [INFO][5889] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.943 [INFO][5889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:44.947719 containerd[1715]: 2026-01-17 00:07:44.945 [INFO][5882] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:07:44.948836 containerd[1715]: time="2026-01-17T00:07:44.948148714Z" level=info msg="TearDown network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\" successfully" Jan 17 00:07:44.948836 containerd[1715]: time="2026-01-17T00:07:44.948180594Z" level=info msg="StopPodSandbox for \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\" returns successfully" Jan 17 00:07:44.948836 containerd[1715]: time="2026-01-17T00:07:44.948719435Z" level=info msg="RemovePodSandbox for \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\"" Jan 17 00:07:44.948836 containerd[1715]: time="2026-01-17T00:07:44.948747835Z" level=info msg="Forcibly stopping sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\"" Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:44.985 [WARNING][5904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e2e5377f-9c87-4d0a-b448-a7595a3af9ad", ResourceVersion:"1311", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"4838523a9e779312bf40bc3f9e758c5519ed21a7301a71f56c8d296f542d3c9d", Pod:"goldmane-666569f655-vtx75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia41bb928959", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:44.985 [INFO][5904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:44.985 [INFO][5904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" iface="eth0" netns="" Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:44.985 [INFO][5904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:44.985 [INFO][5904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:45.004 [INFO][5911] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:45.004 [INFO][5911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:45.004 [INFO][5911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:45.014 [WARNING][5911] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:45.015 [INFO][5911] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" HandleID="k8s-pod-network.5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-goldmane--666569f655--vtx75-eth0" Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:45.016 [INFO][5911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:45.020560 containerd[1715]: 2026-01-17 00:07:45.017 [INFO][5904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f" Jan 17 00:07:45.020560 containerd[1715]: time="2026-01-17T00:07:45.020291183Z" level=info msg="TearDown network for sandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\" successfully" Jan 17 00:07:45.070149 containerd[1715]: time="2026-01-17T00:07:45.069963082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:07:45.070149 containerd[1715]: time="2026-01-17T00:07:45.070034962Z" level=info msg="RemovePodSandbox \"5c92b847518548db2f9680406ff8805b5889ba4610abf78b6f566a94a49b5e2f\" returns successfully" Jan 17 00:07:45.072152 containerd[1715]: time="2026-01-17T00:07:45.071884323Z" level=info msg="StopPodSandbox for \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\"" Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.144 [WARNING][5926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0", GenerateName:"calico-apiserver-77bf786874-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e9ff54d-9f3a-4f62-92e0-56921b0f16ea", ResourceVersion:"1313", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf786874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c", Pod:"calico-apiserver-77bf786874-gphpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7496ae7306", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.146 [INFO][5926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.146 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" iface="eth0" netns="" Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.146 [INFO][5926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.146 [INFO][5926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.176 [INFO][5933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.178 [INFO][5933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.178 [INFO][5933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.201 [WARNING][5933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.201 [INFO][5933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.203 [INFO][5933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:45.207047 containerd[1715]: 2026-01-17 00:07:45.205 [INFO][5926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:07:45.207047 containerd[1715]: time="2026-01-17T00:07:45.206856896Z" level=info msg="TearDown network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\" successfully" Jan 17 00:07:45.207047 containerd[1715]: time="2026-01-17T00:07:45.206880616Z" level=info msg="StopPodSandbox for \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\" returns successfully" Jan 17 00:07:45.209449 containerd[1715]: time="2026-01-17T00:07:45.208453057Z" level=info msg="RemovePodSandbox for \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\"" Jan 17 00:07:45.209449 containerd[1715]: time="2026-01-17T00:07:45.208490177Z" level=info msg="Forcibly stopping sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\"" Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.252 [WARNING][5947] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0", GenerateName:"calico-apiserver-77bf786874-", Namespace:"calico-apiserver", SelfLink:"", UID:"6e9ff54d-9f3a-4f62-92e0-56921b0f16ea", ResourceVersion:"1313", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bf786874", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"a124947802791855812284068aa6ef0afe5e9019913b8114a186121833f5b68c", Pod:"calico-apiserver-77bf786874-gphpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7496ae7306", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.252 [INFO][5947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.252 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" iface="eth0" netns="" Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.252 [INFO][5947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.252 [INFO][5947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.274 [INFO][5955] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.275 [INFO][5955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.275 [INFO][5955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.285 [WARNING][5955] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.285 [INFO][5955] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" HandleID="k8s-pod-network.18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-calico--apiserver--77bf786874--gphpw-eth0" Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.286 [INFO][5955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:45.290642 containerd[1715]: 2026-01-17 00:07:45.288 [INFO][5947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e" Jan 17 00:07:45.290642 containerd[1715]: time="2026-01-17T00:07:45.290592169Z" level=info msg="TearDown network for sandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\" successfully" Jan 17 00:07:45.325380 containerd[1715]: time="2026-01-17T00:07:45.324277062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:07:45.325380 containerd[1715]: time="2026-01-17T00:07:45.324351182Z" level=info msg="RemovePodSandbox \"18c03e218043e0eced57e791c9fd7ee738d53ac42e91a7c29d982911214e6b4e\" returns successfully" Jan 17 00:07:45.326488 containerd[1715]: time="2026-01-17T00:07:45.326465423Z" level=info msg="StopPodSandbox for \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\"" Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.365 [WARNING][5971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acb3da73-aad2-4399-b6f1-7f3c1a0d99c5", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86", Pod:"coredns-674b8bbfcf-n869n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fb3cbee3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.366 [INFO][5971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.366 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" iface="eth0" netns="" Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.366 [INFO][5971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.366 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.393 [INFO][5978] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.394 [INFO][5978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.394 [INFO][5978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.410 [WARNING][5978] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.410 [INFO][5978] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.413 [INFO][5978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:45.420284 containerd[1715]: 2026-01-17 00:07:45.418 [INFO][5971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:07:45.420786 containerd[1715]: time="2026-01-17T00:07:45.420317020Z" level=info msg="TearDown network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\" successfully" Jan 17 00:07:45.420786 containerd[1715]: time="2026-01-17T00:07:45.420343460Z" level=info msg="StopPodSandbox for \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\" returns successfully" Jan 17 00:07:45.422682 containerd[1715]: time="2026-01-17T00:07:45.422362620Z" level=info msg="RemovePodSandbox for \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\"" Jan 17 00:07:45.422682 containerd[1715]: time="2026-01-17T00:07:45.422398260Z" level=info msg="Forcibly stopping sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\"" Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.457 [WARNING][5993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acb3da73-aad2-4399-b6f1-7f3c1a0d99c5", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f5e0a482e1", ContainerID:"4e86e1cfe88ff68b1c407742aeb3e30bbaebc4b34453f9770eae36583bf11d86", Pod:"coredns-674b8bbfcf-n869n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fb3cbee3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.458 [INFO][5993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.458 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" iface="eth0" netns="" Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.458 [INFO][5993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.458 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.494 [INFO][6000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.495 [INFO][6000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.495 [INFO][6000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.507 [WARNING][6000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.508 [INFO][6000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" HandleID="k8s-pod-network.af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Workload="ci--4081.3.6--n--f5e0a482e1-k8s-coredns--674b8bbfcf--n869n-eth0" Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.509 [INFO][6000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:45.515012 containerd[1715]: 2026-01-17 00:07:45.511 [INFO][5993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd" Jan 17 00:07:45.515012 containerd[1715]: time="2026-01-17T00:07:45.514923337Z" level=info msg="TearDown network for sandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\" successfully" Jan 17 00:07:46.044993 kubelet[3272]: E0117 00:07:46.044589 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd" Jan 17 00:07:47.474365 containerd[1715]: time="2026-01-17T00:07:47.474300826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
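The ImagePullBackOff errors throughout this stretch all trace to one cause: the registry answers 404 for every ghcr.io/flatcar/calico/*:v3.30.4 manifest (containerd's "trying next host - response was http.StatusNotFound" further down), so each pull fails with NotFound and kubelet backs off and retries. A hedged diagnostic sketch that asks the registry directly whether the tag exists, assuming ghcr.io's anonymous token flow for public pulls:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func tagExists(registry, repo, tag string) (bool, error) {
	// ghcr.io issues anonymous pull tokens from its /token endpoint.
	tokURL := fmt.Sprintf("https://%s/token?service=%s&scope=repository:%s:pull",
		registry, registry, repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// HEAD the manifest via the standard OCI distribution API; a 404
	// here is what containerd surfaces as "not found" in the log.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag), nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept",
		"application/vnd.oci.image.index.v1+json, "+
			"application/vnd.docker.distribution.manifest.list.v2+json, "+
			"application/vnd.docker.distribution.manifest.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("ghcr.io", "flatcar/calico/csi", "v3.30.4")
	fmt.Println(ok, err) // expect: false <nil> -- the tag is absent, per the log
}

Until the tag is published or the pod specs stop referencing it, kubelet keeps lengthening the image-pull back-off (300s by default at the cap) and retrying, which is why the same pod_workers.go errors recur for minutes below.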
Jan 17 00:07:47.474365 containerd[1715]: time="2026-01-17T00:07:47.474368306Z" level=info msg="RemovePodSandbox \"af5c112907717e4aa0a2f5d0fa8237470986dc89aaf38771ef652d0624665cdd\" returns successfully"
Jan 17 00:07:48.045073 kubelet[3272]: E0117 00:07:48.043658 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3"
Jan 17 00:07:51.049641 kubelet[3272]: E0117 00:07:51.049512 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641"
Jan 17 00:07:54.045328 kubelet[3272]: E0117 00:07:54.043785 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea"
Jan 17 00:07:54.045328 kubelet[3272]: E0117 00:07:54.043900 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad"
Jan 17 00:07:56.047385 kubelet[3272]: E0117 00:07:56.047339 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"
Jan 17 00:07:58.591816 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.16.10:41920.service - OpenSSH per-connection server daemon (10.200.16.10:41920).
Jan 17 00:07:59.048443 sshd[6014]: Accepted publickey for core from 10.200.16.10 port 41920 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:07:59.050907 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:07:59.055209 systemd-logind[1696]: New session 10 of user core.
Jan 17 00:07:59.062677 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 00:07:59.470291 sshd[6014]: pam_unix(sshd:session): session closed for user core
Jan 17 00:07:59.473991 systemd-logind[1696]: Session 10 logged out. Waiting for processes to exit.
Jan 17 00:07:59.475237 systemd[1]: sshd@7-10.200.20.17:22-10.200.16.10:41920.service: Deactivated successfully.
Jan 17 00:07:59.479744 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 00:07:59.481698 systemd-logind[1696]: Removed session 10.
Jan 17 00:08:01.065755 kubelet[3272]: E0117 00:08:01.065698 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd"
Jan 17 00:08:02.044893 kubelet[3272]: E0117 00:08:02.044564 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3"
Jan 17 00:08:04.044491 containerd[1715]: time="2026-01-17T00:08:04.044454727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 17 00:08:04.308957 containerd[1715]: time="2026-01-17T00:08:04.308690956Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:08:04.311840 containerd[1715]: time="2026-01-17T00:08:04.311736637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 17 00:08:04.311840 containerd[1715]: time="2026-01-17T00:08:04.311809077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 17 00:08:04.311979 kubelet[3272]: E0117 00:08:04.311941 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:08:04.312228 kubelet[3272]: E0117 00:08:04.311984 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:08:04.312228 kubelet[3272]: E0117 00:08:04.312097 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bf2168dfdbe84860b95b751791854241,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t77t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b946cd94f-7mkrh_calico-system(8057ab60-fa20-42e9-a7e5-844713387641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:08:04.314280 containerd[1715]: time="2026-01-17T00:08:04.314210878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 17 00:08:04.544004 containerd[1715]: time="2026-01-17T00:08:04.543827613Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:08:04.546636 containerd[1715]: time="2026-01-17T00:08:04.546533094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 17 00:08:04.546636 containerd[1715]: time="2026-01-17T00:08:04.546590814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:08:04.548015 kubelet[3272]: E0117 00:08:04.546901 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:08:04.548015 kubelet[3272]: E0117 00:08:04.546954 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:08:04.548015 kubelet[3272]: E0117 00:08:04.547062 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t77t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b946cd94f-7mkrh_calico-system(8057ab60-fa20-42e9-a7e5-844713387641): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:08:04.548444 kubelet[3272]: E0117 00:08:04.548416 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641"
Jan 17 00:08:04.566866 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.16.10:40608.service - OpenSSH per-connection server daemon (10.200.16.10:40608).
Jan 17 00:08:05.054800 sshd[6034]: Accepted publickey for core from 10.200.16.10 port 40608 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:05.055916 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:05.059879 systemd-logind[1696]: New session 11 of user core.
Jan 17 00:08:05.062884 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 00:08:05.485062 sshd[6034]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:05.488331 systemd[1]: sshd@8-10.200.20.17:22-10.200.16.10:40608.service: Deactivated successfully.
Jan 17 00:08:05.490287 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 00:08:05.494609 systemd-logind[1696]: Session 11 logged out. Waiting for processes to exit.
Jan 17 00:08:05.495923 systemd-logind[1696]: Removed session 11.
Jan 17 00:08:06.043443 kubelet[3272]: E0117 00:08:06.043404 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad"
Jan 17 00:08:07.046846 kubelet[3272]: E0117 00:08:07.046781 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"
Jan 17 00:08:08.045804 containerd[1715]: time="2026-01-17T00:08:08.045643499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:08:08.274029 containerd[1715]: time="2026-01-17T00:08:08.273981870Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:08:08.276488 containerd[1715]: time="2026-01-17T00:08:08.276443071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:08:08.276561 containerd[1715]: time="2026-01-17T00:08:08.276549071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:08:08.277001 kubelet[3272]: E0117 00:08:08.276733 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:08:08.277001 kubelet[3272]: E0117 00:08:08.276816 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:08:08.277001 kubelet[3272]: E0117 00:08:08.276943 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5ndq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf786874-gphpw_calico-apiserver(6e9ff54d-9f3a-4f62-92e0-56921b0f16ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:08:08.278591 kubelet[3272]: E0117 00:08:08.278540 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea"
Jan 17 00:08:10.573823 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.16.10:37692.service - OpenSSH per-connection server daemon (10.200.16.10:37692).
Jan 17 00:08:11.076888 sshd[6050]: Accepted publickey for core from 10.200.16.10 port 37692 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:11.078495 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:11.085608 systemd-logind[1696]: New session 12 of user core.
Jan 17 00:08:11.088681 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 00:08:11.521809 sshd[6050]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:11.525331 systemd[1]: sshd@9-10.200.20.17:22-10.200.16.10:37692.service: Deactivated successfully.
Jan 17 00:08:11.528321 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 00:08:11.529166 systemd-logind[1696]: Session 12 logged out. Waiting for processes to exit.
Jan 17 00:08:11.532215 systemd-logind[1696]: Removed session 12.
Jan 17 00:08:11.626369 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.16.10:37696.service - OpenSSH per-connection server daemon (10.200.16.10:37696).
Jan 17 00:08:12.046617 containerd[1715]: time="2026-01-17T00:08:12.046565293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 17 00:08:12.114897 sshd[6064]: Accepted publickey for core from 10.200.16.10 port 37696 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:12.116673 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:12.120614 systemd-logind[1696]: New session 13 of user core.
Jan 17 00:08:12.128708 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 00:08:12.288769 containerd[1715]: time="2026-01-17T00:08:12.288697710Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:08:12.291450 containerd[1715]: time="2026-01-17T00:08:12.291323031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 17 00:08:12.291450 containerd[1715]: time="2026-01-17T00:08:12.291401991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:08:12.291778 kubelet[3272]: E0117 00:08:12.291730 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:08:12.292095 kubelet[3272]: E0117 00:08:12.291788 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:08:12.292095 kubelet[3272]: E0117 00:08:12.291923 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r22zc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68ddb45bfc-grgqw_calico-system(e747a046-268c-4a51-81e2-3f445b48b5cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:08:12.294638 kubelet[3272]: E0117 00:08:12.294590 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd"
Jan 17 00:08:12.597695 sshd[6064]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:12.600955 systemd[1]: sshd@10-10.200.20.17:22-10.200.16.10:37696.service: Deactivated successfully.
Jan 17 00:08:12.603477 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 00:08:12.606602 systemd-logind[1696]: Session 13 logged out. Waiting for processes to exit.
Jan 17 00:08:12.608366 systemd-logind[1696]: Removed session 13.
Jan 17 00:08:12.704590 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.16.10:37706.service - OpenSSH per-connection server daemon (10.200.16.10:37706).
Jan 17 00:08:13.198142 sshd[6095]: Accepted publickey for core from 10.200.16.10 port 37706 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:13.199584 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:13.203875 systemd-logind[1696]: New session 14 of user core.
Jan 17 00:08:13.208688 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 00:08:13.613437 sshd[6095]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:13.619504 systemd-logind[1696]: Session 14 logged out. Waiting for processes to exit.
Jan 17 00:08:13.619759 systemd[1]: sshd@11-10.200.20.17:22-10.200.16.10:37706.service: Deactivated successfully.
Jan 17 00:08:13.621262 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 00:08:13.623440 systemd-logind[1696]: Removed session 14.
Jan 17 00:08:14.044106 kubelet[3272]: E0117 00:08:14.044058 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3"
Jan 17 00:08:15.047811 kubelet[3272]: E0117 00:08:15.047649 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641"
Jan 17 00:08:18.044589 containerd[1715]: time="2026-01-17T00:08:18.044548535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 17 00:08:18.324516 containerd[1715]: time="2026-01-17T00:08:18.324135008Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:08:18.327914 containerd[1715]: time="2026-01-17T00:08:18.327878730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 17 00:08:18.327984 containerd[1715]: time="2026-01-17T00:08:18.327973450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:08:18.328150 kubelet[3272]: E0117 00:08:18.328109 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:08:18.328420 kubelet[3272]: E0117 00:08:18.328162 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:08:18.328420 kubelet[3272]: E0117 00:08:18.328361 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzksz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vtx75_calico-system(e2e5377f-9c87-4d0a-b448-a7595a3af9ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:08:18.329082 containerd[1715]: time="2026-01-17T00:08:18.328802250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 17 00:08:18.330559 kubelet[3272]: E0117 00:08:18.330354 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad"
Jan 17 00:08:18.654762 containerd[1715]: time="2026-01-17T00:08:18.654620182Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:08:18.659255 containerd[1715]: time="2026-01-17T00:08:18.659158464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 17 00:08:18.659255 containerd[1715]: time="2026-01-17T00:08:18.659225384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 17 00:08:18.659994 kubelet[3272]: E0117 00:08:18.659608 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:08:18.659994 kubelet[3272]: E0117 00:08:18.659658 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:08:18.660141 kubelet[3272]: E0117 00:08:18.660103 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:08:18.662319 containerd[1715]: time="2026-01-17T00:08:18.662219265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 00:08:18.701808 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.16.10:37722.service - OpenSSH per-connection server daemon (10.200.16.10:37722).
Jan 17 00:08:18.981685 containerd[1715]: time="2026-01-17T00:08:18.981632434Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:08:18.984068 containerd[1715]: time="2026-01-17T00:08:18.984023035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:08:18.984167 containerd[1715]: time="2026-01-17T00:08:18.984124235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:08:18.984368 kubelet[3272]: E0117 00:08:18.984328 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:08:18.984467 kubelet[3272]: E0117 00:08:18.984388 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:08:18.986399 kubelet[3272]: E0117 00:08:18.984508 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gm8_calico-system(8214a0c3-a0f7-40b6-915d-08cea6de347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:08:18.986695 kubelet[3272]: E0117 00:08:18.986642 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"
Jan 17 00:08:19.174522 sshd[6127]: Accepted publickey for core from 10.200.16.10 port 37722 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:19.178178 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:19.183545 systemd-logind[1696]: New session 15 of user core.
Jan 17 00:08:19.187740 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 00:08:19.593006 sshd[6127]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:19.599341 systemd[1]: sshd@12-10.200.20.17:22-10.200.16.10:37722.service: Deactivated successfully.
Jan 17 00:08:19.601205 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 00:08:19.601898 systemd-logind[1696]: Session 15 logged out. Waiting for processes to exit.
Jan 17 00:08:19.602838 systemd-logind[1696]: Removed session 15.
Jan 17 00:08:22.043994 kubelet[3272]: E0117 00:08:22.043625 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea"
Jan 17 00:08:24.681635 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.16.10:34538.service - OpenSSH per-connection server daemon (10.200.16.10:34538).
Jan 17 00:08:25.177138 sshd[6148]: Accepted publickey for core from 10.200.16.10 port 34538 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:25.179186 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:25.186417 systemd-logind[1696]: New session 16 of user core.
Jan 17 00:08:25.191726 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 00:08:25.608181 sshd[6148]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:25.613617 systemd[1]: sshd@13-10.200.20.17:22-10.200.16.10:34538.service: Deactivated successfully.
Jan 17 00:08:25.616641 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 00:08:25.617798 systemd-logind[1696]: Session 16 logged out. Waiting for processes to exit.
Jan 17 00:08:25.620879 systemd-logind[1696]: Removed session 16.
Jan 17 00:08:26.046032 kubelet[3272]: E0117 00:08:26.045988 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd"
Jan 17 00:08:26.046406 containerd[1715]: time="2026-01-17T00:08:26.046089417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:08:26.352824 containerd[1715]: time="2026-01-17T00:08:26.352549903Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:08:26.355543 containerd[1715]: time="2026-01-17T00:08:26.355448104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:08:26.355543 containerd[1715]: time="2026-01-17T00:08:26.355519064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:08:26.355768 kubelet[3272]: E0117 00:08:26.355730 3272 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:08:26.355834 kubelet[3272]: E0117 00:08:26.355774 3272 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:08:26.356135 kubelet[3272]: E0117 00:08:26.355900 3272 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmd54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bf786874-qhq5d_calico-apiserver(9340ab9f-05b7-44f8-b60d-bcae76bd89d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:08:26.357186 kubelet[3272]: E0117 00:08:26.357129 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3"
Jan 17 00:08:30.045785 kubelet[3272]: E0117 00:08:30.045705 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641"
Jan 17 00:08:30.046184 kubelet[3272]: E0117 00:08:30.045770 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"
Jan 17 00:08:30.691782 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.16.10:59034.service - OpenSSH per-connection server daemon (10.200.16.10:59034).
Jan 17 00:08:31.047001 kubelet[3272]: E0117 00:08:31.046351 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad"
Jan 17 00:08:31.145443 sshd[6161]: Accepted publickey for core from 10.200.16.10 port 59034 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:31.146449 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:31.150590 systemd-logind[1696]: New session 17 of user core.
Jan 17 00:08:31.156725 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:08:31.565015 sshd[6161]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:31.569663 systemd-logind[1696]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:08:31.570722 systemd[1]: sshd@14-10.200.20.17:22-10.200.16.10:59034.service: Deactivated successfully.
Jan 17 00:08:31.574151 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:08:31.576061 systemd-logind[1696]: Removed session 17.
Jan 17 00:08:31.650881 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.16.10:59048.service - OpenSSH per-connection server daemon (10.200.16.10:59048).
Jan 17 00:08:32.099151 sshd[6174]: Accepted publickey for core from 10.200.16.10 port 59048 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:32.100448 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:32.106956 systemd-logind[1696]: New session 18 of user core.
Jan 17 00:08:32.110474 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:08:33.339512 sshd[6174]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:33.342749 systemd-logind[1696]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:08:33.343652 systemd[1]: sshd@15-10.200.20.17:22-10.200.16.10:59048.service: Deactivated successfully.
Jan 17 00:08:33.346928 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:08:33.350187 systemd-logind[1696]: Removed session 18.
Jan 17 00:08:33.432747 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.16.10:59052.service - OpenSSH per-connection server daemon (10.200.16.10:59052).
Jan 17 00:08:33.921430 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 59052 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:33.922872 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:33.926918 systemd-logind[1696]: New session 19 of user core.
Jan 17 00:08:33.937671 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:08:35.038206 sshd[6184]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:35.041651 systemd-logind[1696]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:08:35.043347 systemd[1]: sshd@16-10.200.20.17:22-10.200.16.10:59052.service: Deactivated successfully.
Jan 17 00:08:35.048378 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:08:35.050943 systemd-logind[1696]: Removed session 19.
Jan 17 00:08:35.126780 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.16.10:59060.service - OpenSSH per-connection server daemon (10.200.16.10:59060).
Jan 17 00:08:35.574042 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 59060 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:35.575595 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:35.580626 systemd-logind[1696]: New session 20 of user core.
Jan 17 00:08:35.585701 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:08:36.046354 kubelet[3272]: E0117 00:08:36.045996 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea"
Jan 17 00:08:36.123590 sshd[6207]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:36.126844 systemd-logind[1696]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:08:36.127661 systemd[1]: sshd@17-10.200.20.17:22-10.200.16.10:59060.service: Deactivated successfully.
Jan 17 00:08:36.130020 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:08:36.131113 systemd-logind[1696]: Removed session 20.
Jan 17 00:08:36.223133 systemd[1]: Started sshd@18-10.200.20.17:22-10.200.16.10:59070.service - OpenSSH per-connection server daemon (10.200.16.10:59070).
Jan 17 00:08:36.713579 sshd[6218]: Accepted publickey for core from 10.200.16.10 port 59070 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:36.714777 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:36.722668 systemd-logind[1696]: New session 21 of user core.
Jan 17 00:08:36.727710 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:08:37.048020 kubelet[3272]: E0117 00:08:37.047173 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd"
Jan 17 00:08:37.147450 sshd[6218]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:37.151001 systemd[1]: sshd@18-10.200.20.17:22-10.200.16.10:59070.service: Deactivated successfully.
Jan 17 00:08:37.154804 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:08:37.156959 systemd-logind[1696]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:08:37.158092 systemd-logind[1696]: Removed session 21.
Jan 17 00:08:38.045648 kubelet[3272]: E0117 00:08:38.044002 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3"
Jan 17 00:08:41.058782 kubelet[3272]: E0117 00:08:41.058704 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641"
Jan 17 00:08:42.239207 systemd[1]: Started sshd@19-10.200.20.17:22-10.200.16.10:37034.service - OpenSSH per-connection server daemon (10.200.16.10:37034).
Jan 17 00:08:42.729552 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 37034 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:42.731190 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:42.736498 systemd-logind[1696]: New session 22 of user core.
Jan 17 00:08:42.742684 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:08:43.050652 kubelet[3272]: E0117 00:08:43.050230 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"
Jan 17 00:08:43.174501 sshd[6233]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:43.179973 systemd-logind[1696]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:08:43.181091 systemd[1]: sshd@19-10.200.20.17:22-10.200.16.10:37034.service: Deactivated successfully.
Jan 17 00:08:43.183186 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:08:43.185375 systemd-logind[1696]: Removed session 22.
Jan 17 00:08:45.046128 kubelet[3272]: E0117 00:08:45.046080 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad"
Jan 17 00:08:47.045780 kubelet[3272]: E0117 00:08:47.045357 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea"
Jan 17 00:08:48.043857 kubelet[3272]: E0117 00:08:48.043811 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd"
Jan 17 00:08:48.268435 systemd[1]: Started sshd@20-10.200.20.17:22-10.200.16.10:37046.service - OpenSSH per-connection server daemon (10.200.16.10:37046).
Jan 17 00:08:48.753063 sshd[6270]: Accepted publickey for core from 10.200.16.10 port 37046 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:48.753930 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:48.757579 systemd-logind[1696]: New session 23 of user core.
Jan 17 00:08:48.765695 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 00:08:49.184289 sshd[6270]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:49.190213 systemd[1]: sshd@20-10.200.20.17:22-10.200.16.10:37046.service: Deactivated successfully.
Jan 17 00:08:49.193162 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:08:49.194717 systemd-logind[1696]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:08:49.195963 systemd-logind[1696]: Removed session 23.
Jan 17 00:08:52.045033 kubelet[3272]: E0117 00:08:52.044922 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3"
Jan 17 00:08:54.045003 kubelet[3272]: E0117 00:08:54.044947 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"
Jan 17 00:08:54.271489 systemd[1]: Started sshd@21-10.200.20.17:22-10.200.16.10:47716.service - OpenSSH per-connection server daemon (10.200.16.10:47716).
Jan 17 00:08:54.723707 sshd[6284]: Accepted publickey for core from 10.200.16.10 port 47716 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:08:54.725153 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:08:54.729194 systemd-logind[1696]: New session 24 of user core.
Jan 17 00:08:54.739702 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:08:55.123848 sshd[6284]: pam_unix(sshd:session): session closed for user core
Jan 17 00:08:55.127029 systemd[1]: sshd@21-10.200.20.17:22-10.200.16.10:47716.service: Deactivated successfully.
Jan 17 00:08:55.128689 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:08:55.129359 systemd-logind[1696]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:08:55.130151 systemd-logind[1696]: Removed session 24.
Jan 17 00:08:56.044496 kubelet[3272]: E0117 00:08:56.044421 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b946cd94f-7mkrh" podUID="8057ab60-fa20-42e9-a7e5-844713387641"
Jan 17 00:08:59.044824 kubelet[3272]: E0117 00:08:59.044775 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtx75" podUID="e2e5377f-9c87-4d0a-b448-a7595a3af9ad"
Jan 17 00:08:59.046395 kubelet[3272]: E0117 00:08:59.045872 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-gphpw" podUID="6e9ff54d-9f3a-4f62-92e0-56921b0f16ea"
Jan 17 00:08:59.046395 kubelet[3272]: E0117 00:08:59.045947 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68ddb45bfc-grgqw" podUID="e747a046-268c-4a51-81e2-3f445b48b5cd"
Jan 17 00:09:00.205702 systemd[1]: Started sshd@22-10.200.20.17:22-10.200.16.10:56238.service - OpenSSH per-connection server daemon (10.200.16.10:56238).
Jan 17 00:09:00.664483 sshd[6298]: Accepted publickey for core from 10.200.16.10 port 56238 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:09:00.666105 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:09:00.675422 systemd-logind[1696]: New session 25 of user core.
Jan 17 00:09:00.678666 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:09:01.087400 sshd[6298]: pam_unix(sshd:session): session closed for user core
Jan 17 00:09:01.093804 systemd[1]: sshd@22-10.200.20.17:22-10.200.16.10:56238.service: Deactivated successfully.
Jan 17 00:09:01.096951 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:09:01.099284 systemd-logind[1696]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:09:01.101657 systemd-logind[1696]: Removed session 25.
Jan 17 00:09:06.044398 kubelet[3272]: E0117 00:09:06.044335 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bf786874-qhq5d" podUID="9340ab9f-05b7-44f8-b60d-bcae76bd89d3"
Jan 17 00:09:06.180288 systemd[1]: Started sshd@23-10.200.20.17:22-10.200.16.10:56250.service - OpenSSH per-connection server daemon (10.200.16.10:56250).
Jan 17 00:09:06.625066 sshd[6315]: Accepted publickey for core from 10.200.16.10 port 56250 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:09:06.627518 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:09:06.633919 systemd-logind[1696]: New session 26 of user core.
Jan 17 00:09:06.638949 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 00:09:07.028903 sshd[6315]: pam_unix(sshd:session): session closed for user core
Jan 17 00:09:07.031552 systemd-logind[1696]: Session 26 logged out. Waiting for processes to exit.
Jan 17 00:09:07.033660 systemd[1]: sshd@23-10.200.20.17:22-10.200.16.10:56250.service: Deactivated successfully.
Jan 17 00:09:07.036336 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 00:09:07.037787 systemd-logind[1696]: Removed session 26.
Jan 17 00:09:08.045036 kubelet[3272]: E0117 00:09:08.044662 3272 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gm8" podUID="8214a0c3-a0f7-40b6-915d-08cea6de347e"