Jan 23 23:57:18.172322 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 23:57:18.172343 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:57:18.172351 kernel: KASLR enabled
Jan 23 23:57:18.172357 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 23 23:57:18.172364 kernel: printk: bootconsole [pl11] enabled
Jan 23 23:57:18.172370 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:57:18.172377 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 23 23:57:18.172383 kernel: random: crng init done
Jan 23 23:57:18.172389 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:57:18.172395 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 23 23:57:18.172401 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172407 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172414 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 23:57:18.172420 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172427 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172434 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172440 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172448 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172454 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172461 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 23 23:57:18.172467 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:57:18.172474 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 23 23:57:18.172480 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 23:57:18.172486 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 23 23:57:18.172492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 23 23:57:18.172499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 23 23:57:18.172505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 23 23:57:18.172512 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 23 23:57:18.172519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 23 23:57:18.172526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 23 23:57:18.172532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 23 23:57:18.172539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 23 23:57:18.172545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 23 23:57:18.172551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 23 23:57:18.172557 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 23 23:57:18.172563 kernel: Zone ranges:
Jan 23 23:57:18.172570 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 23 23:57:18.172576 kernel: DMA32 empty
Jan 23 23:57:18.172582 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:57:18.172589 kernel: Movable zone start for each node
Jan 23 23:57:18.172599 kernel: Early memory node ranges
Jan 23 23:57:18.172606 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 23 23:57:18.172612 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 23 23:57:18.172619 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 23 23:57:18.172626 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 23 23:57:18.172634 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 23 23:57:18.172641 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 23 23:57:18.172647 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:57:18.172655 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 23 23:57:18.172664 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 23 23:57:18.172671 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:57:18.172678 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 23:57:18.172685 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:57:18.172693 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 23 23:57:18.172700 kernel: psci: SMC Calling Convention v1.4
Jan 23 23:57:18.172708 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 23 23:57:18.172715 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 23 23:57:18.172725 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:57:18.172732 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:57:18.172740 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:57:18.172747 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:57:18.172754 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:57:18.172762 kernel: CPU features: detected: Hardware dirty bit management
Jan 23 23:57:18.172769 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:57:18.172777 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 23:57:18.172784 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 23:57:18.172791 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 23:57:18.172798 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 23 23:57:18.172808 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 23:57:18.172816 kernel: alternatives: applying boot alternatives
Jan 23 23:57:18.172825 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:57:18.172834 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:57:18.172842 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:57:18.172850 kernel: Fallback order for Node 0: 0
Jan 23 23:57:18.172857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 23 23:57:18.174904 kernel: Policy zone: Normal
Jan 23 23:57:18.174915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:57:18.174922 kernel: software IO TLB: area num 2.
Jan 23 23:57:18.174929 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 23 23:57:18.174941 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Jan 23 23:57:18.174948 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:57:18.174955 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:57:18.174963 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:57:18.174970 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:57:18.174977 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:57:18.174983 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:57:18.174990 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:57:18.174997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:57:18.175004 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:57:18.175011 kernel: GICv3: 960 SPIs implemented
Jan 23 23:57:18.175019 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:57:18.175026 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:57:18.175033 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 23 23:57:18.175039 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 23 23:57:18.175046 kernel: ITS: No ITS available, not enabling LPIs
Jan 23 23:57:18.175053 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:57:18.175060 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:57:18.175067 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 23:57:18.175074 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 23:57:18.175081 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 23:57:18.175088 kernel: Console: colour dummy device 80x25
Jan 23 23:57:18.175096 kernel: printk: console [tty1] enabled
Jan 23 23:57:18.175104 kernel: ACPI: Core revision 20230628
Jan 23 23:57:18.175111 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 23:57:18.175118 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:57:18.175125 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:57:18.175132 kernel: landlock: Up and running.
Jan 23 23:57:18.175139 kernel: SELinux: Initializing.
Jan 23 23:57:18.175146 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:57:18.175153 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:57:18.175161 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:57:18.175168 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:57:18.175175 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 23 23:57:18.175182 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 23:57:18.175189 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 23:57:18.175196 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:57:18.175203 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:57:18.175210 kernel: Remapping and enabling EFI services.
Jan 23 23:57:18.175223 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:57:18.175230 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:57:18.175238 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 23 23:57:18.175245 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:57:18.175254 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 23:57:18.175261 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:57:18.175269 kernel: SMP: Total of 2 processors activated.
Jan 23 23:57:18.175276 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:57:18.175284 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 23 23:57:18.175292 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 23:57:18.175300 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:57:18.175307 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 23:57:18.175314 kernel: CPU features: detected: LSE atomic instructions
Jan 23 23:57:18.175322 kernel: CPU features: detected: Privileged Access Never
Jan 23 23:57:18.175329 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:57:18.175336 kernel: alternatives: applying system-wide alternatives
Jan 23 23:57:18.175344 kernel: devtmpfs: initialized
Jan 23 23:57:18.175351 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:57:18.175360 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:57:18.175368 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:57:18.175375 kernel: SMBIOS 3.1.0 present.
Jan 23 23:57:18.175383 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 23 23:57:18.175390 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:57:18.175397 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:57:18.175405 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:57:18.175412 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:57:18.175419 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:57:18.175428 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 23 23:57:18.175436 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:57:18.175443 kernel: cpuidle: using governor menu
Jan 23 23:57:18.175450 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:57:18.175458 kernel: ASID allocator initialised with 32768 entries
Jan 23 23:57:18.175465 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:57:18.175472 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:57:18.175480 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 23:57:18.175487 kernel: Modules: 0 pages in range for non-PLT usage
Jan 23 23:57:18.175496 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:57:18.175503 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:57:18.175510 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:57:18.175518 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:57:18.175525 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:57:18.175532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:57:18.175540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:57:18.175547 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:57:18.175554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:57:18.175563 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:57:18.175570 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:57:18.175578 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:57:18.175585 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:57:18.175592 kernel: ACPI: Interpreter enabled
Jan 23 23:57:18.175599 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:57:18.175607 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 23:57:18.175614 kernel: printk: console [ttyAMA0] enabled
Jan 23 23:57:18.175621 kernel: printk: bootconsole [pl11] disabled
Jan 23 23:57:18.175630 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 23 23:57:18.175637 kernel: iommu: Default domain type: Translated
Jan 23 23:57:18.175645 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:57:18.175652 kernel: efivars: Registered efivars operations
Jan 23 23:57:18.175659 kernel: vgaarb: loaded
Jan 23 23:57:18.175667 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:57:18.175674 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:57:18.175681 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:57:18.175688 kernel: pnp: PnP ACPI init
Jan 23 23:57:18.175697 kernel: pnp: PnP ACPI: found 0 devices
Jan 23 23:57:18.175704 kernel: NET: Registered PF_INET protocol family
Jan 23 23:57:18.175712 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:57:18.175719 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:57:18.175727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:57:18.175734 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:57:18.175741 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:57:18.175749 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:57:18.175756 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:57:18.175765 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:57:18.175772 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:57:18.175780 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:57:18.175787 kernel: kvm [1]: HYP mode not available
Jan 23 23:57:18.175794 kernel: Initialise system trusted keyrings
Jan 23 23:57:18.175801 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:57:18.175809 kernel: Key type asymmetric registered
Jan 23 23:57:18.175816 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:57:18.175823 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:57:18.175832 kernel: io scheduler mq-deadline registered
Jan 23 23:57:18.175839 kernel: io scheduler kyber registered
Jan 23 23:57:18.175846 kernel: io scheduler bfq registered
Jan 23 23:57:18.175853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:57:18.175868 kernel: thunder_xcv, ver 1.0
Jan 23 23:57:18.175877 kernel: thunder_bgx, ver 1.0
Jan 23 23:57:18.175884 kernel: nicpf, ver 1.0
Jan 23 23:57:18.175891 kernel: nicvf, ver 1.0
Jan 23 23:57:18.176034 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:57:18.176114 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:57:17 UTC (1769212637)
Jan 23 23:57:18.176125 kernel: efifb: probing for efifb
Jan 23 23:57:18.176133 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 23:57:18.176140 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 23:57:18.176148 kernel: efifb: scrolling: redraw
Jan 23 23:57:18.176155 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 23:57:18.176162 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 23:57:18.176170 kernel: fb0: EFI VGA frame buffer device
Jan 23 23:57:18.176179 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 23 23:57:18.176186 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:57:18.176194 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 23 23:57:18.176201 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:57:18.176208 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:57:18.176216 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:57:18.176223 kernel: Segment Routing with IPv6
Jan 23 23:57:18.176230 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:57:18.176238 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:57:18.176246 kernel: Key type dns_resolver registered
Jan 23 23:57:18.176254 kernel: registered taskstats version 1
Jan 23 23:57:18.176261 kernel: Loading compiled-in X.509 certificates
Jan 23 23:57:18.176269 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:57:18.176276 kernel: Key type .fscrypt registered
Jan 23 23:57:18.176283 kernel: Key type fscrypt-provisioning registered
Jan 23 23:57:18.176290 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:57:18.176297 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:57:18.176305 kernel: ima: No architecture policies found
Jan 23 23:57:18.176313 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:57:18.176321 kernel: clk: Disabling unused clocks
Jan 23 23:57:18.176328 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:57:18.176336 kernel: Run /init as init process
Jan 23 23:57:18.176343 kernel: with arguments:
Jan 23 23:57:18.176350 kernel: /init
Jan 23 23:57:18.176357 kernel: with environment:
Jan 23 23:57:18.176364 kernel: HOME=/
Jan 23 23:57:18.176371 kernel: TERM=linux
Jan 23 23:57:18.176381 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:57:18.176392 systemd[1]: Detected virtualization microsoft.
Jan 23 23:57:18.176400 systemd[1]: Detected architecture arm64.
Jan 23 23:57:18.176407 systemd[1]: Running in initrd.
Jan 23 23:57:18.176415 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:57:18.176422 systemd[1]: Hostname set to .
Jan 23 23:57:18.176430 systemd[1]: Initializing machine ID from random generator.
Jan 23 23:57:18.176440 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:57:18.176448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:57:18.176456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:57:18.176465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:57:18.176473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:57:18.176481 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:57:18.176489 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:57:18.176499 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:57:18.176508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:57:18.176516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:57:18.176524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:57:18.176532 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:57:18.176541 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:57:18.176548 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:57:18.176556 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:57:18.176564 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:57:18.176574 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:57:18.176582 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:57:18.176590 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:57:18.176598 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:57:18.176606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:57:18.176614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:57:18.176622 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:57:18.176630 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:57:18.176640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:57:18.176648 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:57:18.176656 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:57:18.176663 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:57:18.176687 systemd-journald[217]: Collecting audit messages is disabled.
Jan 23 23:57:18.176708 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:57:18.176717 systemd-journald[217]: Journal started
Jan 23 23:57:18.176735 systemd-journald[217]: Runtime Journal (/run/log/journal/44cea81c1baf4f1eba62ce3021aaca22) is 8.0M, max 78.5M, 70.5M free.
Jan 23 23:57:18.183937 systemd-modules-load[218]: Inserted module 'overlay'
Jan 23 23:57:18.192108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:57:18.208977 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:57:18.209022 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:57:18.216312 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:57:18.222778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:57:18.237998 kernel: Bridge firewalling registered
Jan 23 23:57:18.226977 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 23 23:57:18.233146 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:57:18.245622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:57:18.253476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:57:18.270080 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:57:18.277005 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:57:18.300061 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:57:18.312198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:57:18.320889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:57:18.335070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:57:18.340919 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:57:18.362310 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:57:18.370023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:57:18.376991 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:57:18.395606 dracut-cmdline[250]: dracut-dracut-053
Jan 23 23:57:18.402662 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:57:18.428508 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:57:18.442353 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:57:18.464637 systemd-resolved[269]: Positive Trust Anchors:
Jan 23 23:57:18.464653 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:57:18.464685 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:57:18.470407 systemd-resolved[269]: Defaulting to hostname 'linux'.
Jan 23 23:57:18.471251 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:57:18.482736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:57:18.545877 kernel: SCSI subsystem initialized
Jan 23 23:57:18.553873 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:57:18.562881 kernel: iscsi: registered transport (tcp)
Jan 23 23:57:18.578718 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:57:18.578748 kernel: QLogic iSCSI HBA Driver
Jan 23 23:57:18.616755 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:57:18.628283 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:57:18.655798 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:57:18.655851 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:57:18.660889 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:57:18.708890 kernel: raid6: neonx8 gen() 15783 MB/s
Jan 23 23:57:18.727871 kernel: raid6: neonx4 gen() 15688 MB/s
Jan 23 23:57:18.746868 kernel: raid6: neonx2 gen() 13313 MB/s
Jan 23 23:57:18.766869 kernel: raid6: neonx1 gen() 10489 MB/s
Jan 23 23:57:18.785868 kernel: raid6: int64x8 gen() 6979 MB/s
Jan 23 23:57:18.804872 kernel: raid6: int64x4 gen() 7353 MB/s
Jan 23 23:57:18.824872 kernel: raid6: int64x2 gen() 6146 MB/s
Jan 23 23:57:18.846191 kernel: raid6: int64x1 gen() 5072 MB/s
Jan 23 23:57:18.846201 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Jan 23 23:57:18.868928 kernel: raid6: .... xor() 12046 MB/s, rmw enabled
Jan 23 23:57:18.868939 kernel: raid6: using neon recovery algorithm
Jan 23 23:57:18.879829 kernel: xor: measuring software checksum speed
Jan 23 23:57:18.879842 kernel: 8regs : 19735 MB/sec
Jan 23 23:57:18.882738 kernel: 32regs : 19664 MB/sec
Jan 23 23:57:18.886695 kernel: arm64_neon : 27105 MB/sec
Jan 23 23:57:18.890175 kernel: xor: using function: arm64_neon (27105 MB/sec)
Jan 23 23:57:18.940082 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:57:18.949205 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:57:18.962978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:57:18.982043 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 23 23:57:18.986109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:57:19.000964 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:57:19.026294 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Jan 23 23:57:19.055233 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:57:19.067109 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:57:19.108021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:57:19.124047 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:57:19.147108 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:57:19.155969 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:57:19.169448 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:57:19.182176 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:57:19.203040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:57:19.224858 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:57:19.240150 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 23:57:19.241703 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:57:19.241768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:57:19.251884 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:57:19.257945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:57:19.306958 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 23:57:19.306987 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 23:57:19.306997 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 23:57:19.307007 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 23 23:57:19.307017 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 23:57:19.258005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:57:19.322127 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 23:57:19.291573 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:57:19.323110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:57:19.360417 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 23:57:19.360445 kernel: PTP clock support registered
Jan 23 23:57:19.360456 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 23:57:19.360465 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 23 23:57:19.360476 kernel: scsi host1: storvsc_host_t
Jan 23 23:57:19.360507 kernel: scsi host0: storvsc_host_t
Jan 23 23:57:19.357976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:57:19.381397 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 23 23:57:19.381452 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 23:57:19.358094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:57:19.397127 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 23:57:19.390011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:57:19.408917 kernel: hv_vmbus: registering driver hv_utils
Jan 23 23:57:19.412876 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 23:57:19.415874 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 23:57:19.415899 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 23:57:19.594369 systemd-resolved[269]: Clock change detected. Flushing caches.
Jan 23 23:57:19.604999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:57:19.620968 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: VF slot 1 added
Jan 23 23:57:19.622150 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 23:57:19.624902 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:57:19.645514 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 23:57:19.645537 kernel: hv_vmbus: registering driver hv_pci
Jan 23 23:57:19.645546 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 23:57:19.658961 kernel: hv_pci 79155615-6f80-4910-bcee-1f011ee5b8af: PCI VMBus probing: Using version 0x10004
Jan 23 23:57:19.659162 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 23 23:57:19.671267 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 23 23:57:19.671460 kernel: hv_pci 79155615-6f80-4910-bcee-1f011ee5b8af: PCI host bridge to bus 6f80:00
Jan 23 23:57:19.671553 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 23:57:19.678488 kernel: pci_bus 6f80:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 23 23:57:19.678676 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 23 23:57:19.689586 kernel: pci_bus 6f80:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 23:57:19.689724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:57:19.689822 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 23 23:57:19.698723 kernel: pci 6f80:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 23 23:57:19.699052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:57:19.720963 kernel: pci 6f80:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:57:19.732616 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:57:19.732659 kernel: pci 6f80:00:02.0: enabling Extended Tags
Jan 23 23:57:19.732688 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 23:57:19.742957 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:57:19.743134 kernel: pci 6f80:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6f80:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 23 23:57:19.762405 kernel: pci_bus 6f80:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 23:57:19.762582 kernel: pci 6f80:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:57:19.802268 kernel: mlx5_core 6f80:00:02.0: enabling device (0000 -> 0002)
Jan 23 23:57:19.807952 kernel: mlx5_core 6f80:00:02.0: firmware version: 16.30.5026
Jan 23 23:57:20.004141 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: VF registering: eth1
Jan 23 23:57:20.004345 kernel: mlx5_core 6f80:00:02.0 eth1: joined to eth0
Jan 23 23:57:20.011018 kernel: mlx5_core 6f80:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 23 23:57:20.019962 kernel: mlx5_core 6f80:00:02.0 enP28544s1: renamed from eth1
Jan 23 23:57:20.238970 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (503)
Jan 23 23:57:20.253649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 23:57:20.277170 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 23 23:57:20.301140 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (493)
Jan 23 23:57:20.304230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 23 23:57:20.323256 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 23 23:57:20.328908 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 23 23:57:20.356157 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:57:20.380512 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:57:20.386973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:57:21.397991 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:57:21.398228 disk-uuid[609]: The operation has completed successfully.
Jan 23 23:57:21.458692 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:57:21.459970 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:57:21.502076 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:57:21.512074 sh[722]: Success
Jan 23 23:57:21.538003 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:57:21.899555 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:57:21.907070 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:57:21.917975 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:57:21.939843 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:57:21.939870 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:57:21.945434 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:57:21.949584 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:57:21.952754 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:57:22.274151 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:57:22.278452 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:57:22.293194 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:57:22.302130 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:57:22.328175 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:57:22.328235 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:57:22.332108 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:57:22.369966 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:57:22.378796 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:57:22.387762 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:57:22.398975 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:57:22.405276 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:57:22.422266 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:57:22.433018 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:57:22.459485 systemd-networkd[906]: lo: Link UP
Jan 23 23:57:22.459493 systemd-networkd[906]: lo: Gained carrier
Jan 23 23:57:22.461124 systemd-networkd[906]: Enumeration completed
Jan 23 23:57:22.461641 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:57:22.461644 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:57:22.465221 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:57:22.470309 systemd[1]: Reached target network.target - Network.
Jan 23 23:57:22.536958 kernel: mlx5_core 6f80:00:02.0 enP28544s1: Link up
Jan 23 23:57:22.572554 systemd-networkd[906]: enP28544s1: Link UP
Jan 23 23:57:22.575965 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: Data path switched to VF: enP28544s1
Jan 23 23:57:22.572652 systemd-networkd[906]: eth0: Link UP
Jan 23 23:57:22.572781 systemd-networkd[906]: eth0: Gained carrier
Jan 23 23:57:22.572791 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:57:22.583147 systemd-networkd[906]: enP28544s1: Gained carrier
Jan 23 23:57:22.602000 systemd-networkd[906]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 23:57:23.383612 ignition[905]: Ignition 2.19.0
Jan 23 23:57:23.383624 ignition[905]: Stage: fetch-offline
Jan 23 23:57:23.386611 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:57:23.383660 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:57:23.383668 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:57:23.383780 ignition[905]: parsed url from cmdline: ""
Jan 23 23:57:23.383783 ignition[905]: no config URL provided
Jan 23 23:57:23.410078 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:57:23.383788 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:57:23.383795 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:57:23.383799 ignition[905]: failed to fetch config: resource requires networking
Jan 23 23:57:23.383968 ignition[905]: Ignition finished successfully
Jan 23 23:57:23.427202 ignition[922]: Ignition 2.19.0
Jan 23 23:57:23.427208 ignition[922]: Stage: fetch
Jan 23 23:57:23.427363 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:57:23.427372 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:57:23.427452 ignition[922]: parsed url from cmdline: ""
Jan 23 23:57:23.427455 ignition[922]: no config URL provided
Jan 23 23:57:23.427459 ignition[922]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:57:23.427466 ignition[922]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:57:23.427486 ignition[922]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 23:57:23.509422 ignition[922]: GET result: OK
Jan 23 23:57:23.509511 ignition[922]: config has been read from IMDS userdata
Jan 23 23:57:23.509590 ignition[922]: parsing config with SHA512: 80b2b88befd4fcee4f52a98d00f50ece8fbcc17ace48c6695f7c011577a3bbd0f696074d5fa2fe30a46f835a60da3db8b180dffbfb9c8998118b5dfc5164c3a3
Jan 23 23:57:23.513546 unknown[922]: fetched base config from "system"
Jan 23 23:57:23.513916 ignition[922]: fetch: fetch complete
Jan 23 23:57:23.513554 unknown[922]: fetched base config from "system"
Jan 23 23:57:23.513921 ignition[922]: fetch: fetch passed
Jan 23 23:57:23.513559 unknown[922]: fetched user config from "azure"
Jan 23 23:57:23.513980 ignition[922]: Ignition finished successfully
Jan 23 23:57:23.516954 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:57:23.544010 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:57:23.555329 ignition[928]: Ignition 2.19.0
Jan 23 23:57:23.555337 ignition[928]: Stage: kargs
Jan 23 23:57:23.559100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:57:23.555492 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:57:23.555500 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:57:23.556342 ignition[928]: kargs: kargs passed
Jan 23 23:57:23.556385 ignition[928]: Ignition finished successfully
Jan 23 23:57:23.579198 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:57:23.600799 ignition[934]: Ignition 2.19.0
Jan 23 23:57:23.600811 ignition[934]: Stage: disks
Jan 23 23:57:23.601030 ignition[934]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:57:23.605096 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:57:23.601040 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:57:23.613815 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:57:23.602415 ignition[934]: disks: disks passed
Jan 23 23:57:23.622248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:57:23.603335 ignition[934]: Ignition finished successfully
Jan 23 23:57:23.631874 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:57:23.640665 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:57:23.647767 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:57:23.669177 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:57:23.748975 systemd-fsck[942]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 23 23:57:23.757410 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:57:23.772127 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:57:23.798169 systemd-networkd[906]: eth0: Gained IPv6LL
Jan 23 23:57:23.823973 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:57:23.824787 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:57:23.828981 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:57:23.871009 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:57:23.889954 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (953)
Jan 23 23:57:23.890748 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:57:23.914831 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:57:23.914854 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:57:23.900538 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 23:57:23.928833 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:57:23.914114 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:57:23.914145 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:57:23.958253 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:57:23.925042 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:57:23.936148 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:57:23.954628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:57:24.386833 coreos-metadata[955]: Jan 23 23:57:24.386 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 23:57:24.395567 coreos-metadata[955]: Jan 23 23:57:24.395 INFO Fetch successful
Jan 23 23:57:24.395567 coreos-metadata[955]: Jan 23 23:57:24.395 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 23:57:24.409339 coreos-metadata[955]: Jan 23 23:57:24.407 INFO Fetch successful
Jan 23 23:57:24.425108 coreos-metadata[955]: Jan 23 23:57:24.425 INFO wrote hostname ci-4081.3.6-n-95a9bf6543 to /sysroot/etc/hostname
Jan 23 23:57:24.426395 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:57:24.595228 initrd-setup-root[982]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:57:24.616528 initrd-setup-root[989]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:57:24.638593 initrd-setup-root[996]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:57:24.645975 initrd-setup-root[1003]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:57:25.901477 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:57:25.913302 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:57:25.921110 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:57:25.939645 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:57:25.935575 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:57:25.961448 ignition[1070]: INFO : Ignition 2.19.0
Jan 23 23:57:25.966153 ignition[1070]: INFO : Stage: mount
Jan 23 23:57:25.966153 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:57:25.966153 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:57:25.985252 ignition[1070]: INFO : mount: mount passed
Jan 23 23:57:25.985252 ignition[1070]: INFO : Ignition finished successfully
Jan 23 23:57:25.970102 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:57:25.987176 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:57:25.998192 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:57:26.016286 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:57:26.036977 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1082)
Jan 23 23:57:26.047457 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:57:26.047485 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:57:26.050821 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:57:26.058778 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:57:26.059459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:57:26.086533 ignition[1099]: INFO : Ignition 2.19.0
Jan 23 23:57:26.086533 ignition[1099]: INFO : Stage: files
Jan 23 23:57:26.092968 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:57:26.092968 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:57:26.092968 ignition[1099]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:57:26.114441 ignition[1099]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:57:26.114441 ignition[1099]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:57:26.202485 ignition[1099]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:57:26.208738 ignition[1099]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:57:26.208738 ignition[1099]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:57:26.202857 unknown[1099]: wrote ssh authorized keys file for user: core
Jan 23 23:57:26.229664 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:57:26.238229 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 23:57:26.273565 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:57:26.405404 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 23:57:26.801594 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 23:57:27.040976 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:57:27.040976 ignition[1099]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:57:27.060450 ignition[1099]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:57:27.060450 ignition[1099]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:57:27.060450 ignition[1099]: INFO : files: files passed
Jan 23 23:57:27.060450 ignition[1099]: INFO : Ignition finished successfully
Jan 23 23:57:27.055986 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:57:27.079715 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:57:27.089117 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:57:27.105362 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:57:27.157927 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:57:27.157927 initrd-setup-root-after-ignition[1127]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:57:27.105449 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:57:27.179934 initrd-setup-root-after-ignition[1131]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:57:27.143991 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:57:27.154382 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:57:27.182124 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:57:27.213376 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:57:27.214974 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:57:27.223284 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:57:27.232610 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:57:27.240841 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:57:27.257176 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:57:27.270583 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:57:27.286115 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:57:27.304910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:57:27.315554 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:57:27.320658 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:57:27.329295 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:57:27.329362 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:57:27.342098 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:57:27.351328 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:57:27.360450 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:57:27.369371 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:57:27.378481 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:57:27.387881 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:57:27.396480 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:57:27.406643 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:57:27.416606 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:57:27.424960 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:57:27.432546 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:57:27.432607 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:57:27.444281 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:57:27.454218 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:57:27.463850 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:57:27.463890 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:57:27.473959 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:57:27.474017 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:57:27.488430 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:57:27.488492 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:57:27.497441 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:57:27.497483 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:57:27.505738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 23:57:27.505775 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:57:27.530147 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 23 23:57:27.553796 ignition[1153]: INFO : Ignition 2.19.0 Jan 23 23:57:27.553796 ignition[1153]: INFO : Stage: umount Jan 23 23:57:27.556127 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:57:27.582711 ignition[1153]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:57:27.582711 ignition[1153]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:57:27.582711 ignition[1153]: INFO : umount: umount passed Jan 23 23:57:27.582711 ignition[1153]: INFO : Ignition finished successfully Jan 23 23:57:27.560270 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:57:27.560334 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:57:27.565646 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:57:27.565683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:57:27.582492 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:57:27.586013 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:57:27.598573 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:57:27.599047 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:57:27.599150 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:57:27.606827 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:57:27.607209 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:57:27.615273 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:57:27.615320 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:57:27.623334 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:57:27.623372 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:57:27.632020 systemd[1]: Stopped target network.target - Network. Jan 23 23:57:27.640379 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:57:27.640439 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:57:27.649682 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:57:27.658303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:57:27.661994 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:57:27.668438 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:57:27.676517 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:57:27.684499 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:57:27.684551 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:57:27.697893 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:57:27.697963 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:57:27.707680 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:57:27.707741 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:57:27.715820 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:57:27.715864 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:57:27.725240 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:57:27.729665 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 23 23:57:27.741435 systemd-networkd[906]: eth0: DHCPv6 lease lost Jan 23 23:57:27.892714 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: Data path switched from VF: enP28544s1 Jan 23 23:57:27.743136 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:57:27.743309 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:57:27.753802 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:57:27.753840 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:57:27.777104 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:57:27.787600 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:57:27.787668 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:57:27.798087 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:57:27.810218 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:57:27.810335 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:57:27.836330 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:57:27.837467 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:57:27.846633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:57:27.846704 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:57:27.854390 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:57:27.854426 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:57:27.864720 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:57:27.864767 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:57:27.885109 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:57:27.885160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:57:27.892784 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:57:27.892833 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:57:27.922655 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:57:27.929551 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:57:27.929617 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:57:27.937511 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:57:27.937559 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:57:27.947721 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:57:27.947762 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:57:27.958884 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 23:57:27.958927 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:57:27.972165 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:57:27.972210 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:57:27.980969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 23 23:57:27.981003 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:57:27.991460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:57:27.991495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:57:28.001256 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:57:28.001353 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:57:28.009545 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:57:28.009625 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:57:28.017867 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:57:28.017952 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:57:28.028386 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:57:28.037127 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:57:28.037200 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:57:28.069217 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:57:28.086482 systemd[1]: Switching root. Jan 23 23:57:28.201733 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 23 23:57:28.201780 systemd-journald[217]: Journal stopped Jan 23 23:57:18.172857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 23 23:57:18.174904 kernel: Policy zone: Normal Jan 23 23:57:18.174915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:57:18.174922 kernel: software IO TLB: area num 2. Jan 23 23:57:18.174929 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 23 23:57:18.174941 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 23 23:57:18.174948 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:57:18.174955 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:57:18.174963 kernel: rcu: RCU event tracing is enabled. Jan 23 23:57:18.174970 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:57:18.174977 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:57:18.174983 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:57:18.174990 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:57:18.174997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:57:18.175004 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:57:18.175011 kernel: GICv3: 960 SPIs implemented Jan 23 23:57:18.175019 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:57:18.175026 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:57:18.175033 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 23 23:57:18.175039 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 23 23:57:18.175046 kernel: ITS: No ITS available, not enabling LPIs Jan 23 23:57:18.175053 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:57:18.175060 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:57:18.175067 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 23 23:57:18.175074 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 23 23:57:18.175081 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 23 23:57:18.175088 kernel: Console: colour dummy device 80x25 Jan 23 23:57:18.175096 kernel: printk: console [tty1] enabled Jan 23 23:57:18.175104 kernel: ACPI: Core revision 20230628 Jan 23 23:57:18.175111 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 23 23:57:18.175118 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:57:18.175125 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:57:18.175132 kernel: landlock: Up and running. Jan 23 23:57:18.175139 kernel: SELinux: Initializing. Jan 23 23:57:18.175146 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:57:18.175153 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:57:18.175161 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:57:18.175168 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:57:18.175175 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 23 23:57:18.175182 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 23 23:57:18.175189 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 23:57:18.175196 kernel: rcu: Hierarchical SRCU implementation. Jan 23 23:57:18.175203 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:57:18.175210 kernel: Remapping and enabling EFI services. Jan 23 23:57:18.175223 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:57:18.175230 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:57:18.175238 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 23 23:57:18.175245 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:57:18.175254 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 23 23:57:18.175261 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:57:18.175269 kernel: SMP: Total of 2 processors activated. 
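
The skipped-calibration line above is internally consistent: with the arch timer at 25.00 MHz and an assumed tick rate of CONFIG_HZ=1000 (the tick rate is not stated anywhere in this log), loops-per-jiffy and BogoMIPS come out to exactly the logged values:

    # Cross-check of "50.00 BogoMIPS (lpj=25000)", assuming CONFIG_HZ=1000.
    timer_hz = 25_000_000      # arch_timer frequency from the log (25.00 MHz)
    HZ = 1000                  # assumed kernel tick rate
    lpj = timer_hz // HZ       # loops_per_jiffy when calibration is skipped
    bogomips = lpj / (500_000 / HZ)
    print(lpj, bogomips)       # 25000 50.0
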
Jan 23 23:57:18.175276 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:57:18.175284 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 23 23:57:18.175292 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 23:57:18.175300 kernel: CPU features: detected: CRC32 instructions Jan 23 23:57:18.175307 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 23:57:18.175314 kernel: CPU features: detected: LSE atomic instructions Jan 23 23:57:18.175322 kernel: CPU features: detected: Privileged Access Never Jan 23 23:57:18.175329 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:57:18.175336 kernel: alternatives: applying system-wide alternatives Jan 23 23:57:18.175344 kernel: devtmpfs: initialized Jan 23 23:57:18.175351 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:57:18.175360 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:57:18.175368 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:57:18.175375 kernel: SMBIOS 3.1.0 present. Jan 23 23:57:18.175383 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 23 23:57:18.175390 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:57:18.175397 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:57:18.175405 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:57:18.175412 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:57:18.175419 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:57:18.175428 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 23 23:57:18.175436 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:57:18.175443 kernel: cpuidle: using governor menu Jan 23 23:57:18.175450 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 23:57:18.175458 kernel: ASID allocator initialised with 32768 entries Jan 23 23:57:18.175465 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:57:18.175472 kernel: Serial: AMBA PL011 UART driver Jan 23 23:57:18.175480 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 23:57:18.175487 kernel: Modules: 0 pages in range for non-PLT usage Jan 23 23:57:18.175496 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:57:18.175503 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:57:18.175510 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:57:18.175518 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:57:18.175525 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:57:18.175532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:57:18.175540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:57:18.175547 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:57:18.175554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:57:18.175563 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:57:18.175570 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:57:18.175578 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:57:18.175585 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:57:18.175592 kernel: ACPI: Interpreter enabled Jan 23 23:57:18.175599 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:57:18.175607 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 23 23:57:18.175614 kernel: printk: console [ttyAMA0] enabled Jan 23 23:57:18.175621 kernel: printk: bootconsole [pl11] disabled Jan 23 23:57:18.175630 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 23 23:57:18.175637 kernel: iommu: Default domain type: Translated Jan 23 23:57:18.175645 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:57:18.175652 kernel: efivars: Registered efivars operations Jan 23 23:57:18.175659 kernel: vgaarb: loaded Jan 23 23:57:18.175667 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:57:18.175674 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:57:18.175681 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:57:18.175688 kernel: pnp: PnP ACPI init Jan 23 23:57:18.175697 kernel: pnp: PnP ACPI: found 0 devices Jan 23 23:57:18.175704 kernel: NET: Registered PF_INET protocol family Jan 23 23:57:18.175712 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:57:18.175719 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:57:18.175727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:57:18.175734 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:57:18.175741 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:57:18.175749 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:57:18.175756 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:57:18.175765 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:57:18.175772 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 
23:57:18.175780 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:57:18.175787 kernel: kvm [1]: HYP mode not available Jan 23 23:57:18.175794 kernel: Initialise system trusted keyrings Jan 23 23:57:18.175801 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:57:18.175809 kernel: Key type asymmetric registered Jan 23 23:57:18.175816 kernel: Asymmetric key parser 'x509' registered Jan 23 23:57:18.175823 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:57:18.175832 kernel: io scheduler mq-deadline registered Jan 23 23:57:18.175839 kernel: io scheduler kyber registered Jan 23 23:57:18.175846 kernel: io scheduler bfq registered Jan 23 23:57:18.175853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:57:18.175868 kernel: thunder_xcv, ver 1.0 Jan 23 23:57:18.175877 kernel: thunder_bgx, ver 1.0 Jan 23 23:57:18.175884 kernel: nicpf, ver 1.0 Jan 23 23:57:18.175891 kernel: nicvf, ver 1.0 Jan 23 23:57:18.176034 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:57:18.176114 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:57:17 UTC (1769212637) Jan 23 23:57:18.176125 kernel: efifb: probing for efifb Jan 23 23:57:18.176133 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 23:57:18.176140 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 23:57:18.176148 kernel: efifb: scrolling: redraw Jan 23 23:57:18.176155 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 23:57:18.176162 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:57:18.176170 kernel: fb0: EFI VGA frame buffer device Jan 23 23:57:18.176179 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 23 23:57:18.176186 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:57:18.176194 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 23 23:57:18.176201 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:57:18.176208 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:57:18.176216 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:57:18.176223 kernel: Segment Routing with IPv6 Jan 23 23:57:18.176230 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:57:18.176238 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:57:18.176246 kernel: Key type dns_resolver registered Jan 23 23:57:18.176254 kernel: registered taskstats version 1 Jan 23 23:57:18.176261 kernel: Loading compiled-in X.509 certificates Jan 23 23:57:18.176269 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:57:18.176276 kernel: Key type .fscrypt registered Jan 23 23:57:18.176283 kernel: Key type fscrypt-provisioning registered Jan 23 23:57:18.176290 kernel: ima: No TPM chip found, activating TPM-bypass! 
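
The rtc-efi line above reports the same instant twice, once as a calendar date and once as a raw Unix epoch; a one-liner confirms the two agree:

    from datetime import datetime, timezone

    # 1769212637 is the epoch value logged by rtc-efi above.
    print(datetime.fromtimestamp(1769212637, tz=timezone.utc))
    # 2026-01-23 23:57:17+00:00
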
Jan 23 23:57:18.176297 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:57:18.176305 kernel: ima: No architecture policies found Jan 23 23:57:18.176313 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:57:18.176321 kernel: clk: Disabling unused clocks Jan 23 23:57:18.176328 kernel: Freeing unused kernel memory: 39424K Jan 23 23:57:18.176336 kernel: Run /init as init process Jan 23 23:57:18.176343 kernel: with arguments: Jan 23 23:57:18.176350 kernel: /init Jan 23 23:57:18.176357 kernel: with environment: Jan 23 23:57:18.176364 kernel: HOME=/ Jan 23 23:57:18.176371 kernel: TERM=linux Jan 23 23:57:18.176381 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:57:18.176392 systemd[1]: Detected virtualization microsoft. Jan 23 23:57:18.176400 systemd[1]: Detected architecture arm64. Jan 23 23:57:18.176407 systemd[1]: Running in initrd. Jan 23 23:57:18.176415 systemd[1]: No hostname configured, using default hostname. Jan 23 23:57:18.176422 systemd[1]: Hostname set to <localhost>. Jan 23 23:57:18.176430 systemd[1]: Initializing machine ID from random generator. Jan 23 23:57:18.176440 systemd[1]: Queued start job for default target initrd.target. Jan 23 23:57:18.176448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:57:18.176456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:57:18.176465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:57:18.176473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:57:18.176481 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:57:18.176489 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:57:18.176499 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:57:18.176508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:57:18.176516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:57:18.176524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:57:18.176532 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:57:18.176541 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:57:18.176548 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:57:18.176556 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:57:18.176564 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:57:18.176574 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:57:18.176582 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:57:18.176590 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:57:18.176598 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:57:18.176606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:57:18.176614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:57:18.176622 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:57:18.176630 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 23:57:18.176640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:57:18.176648 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:57:18.176656 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:57:18.176663 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:57:18.176687 systemd-journald[217]: Collecting audit messages is disabled. Jan 23 23:57:18.176708 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:57:18.176717 systemd-journald[217]: Journal started Jan 23 23:57:18.176735 systemd-journald[217]: Runtime Journal (/run/log/journal/44cea81c1baf4f1eba62ce3021aaca22) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:57:18.183937 systemd-modules-load[218]: Inserted module 'overlay' Jan 23 23:57:18.192108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:57:18.208977 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:57:18.209022 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 23:57:18.216312 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:57:18.222778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:57:18.237998 kernel: Bridge firewalling registered Jan 23 23:57:18.226977 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 23 23:57:18.233146 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:57:18.245622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:57:18.253476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:57:18.270080 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:57:18.277005 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:57:18.300061 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:57:18.312198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:57:18.320889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:57:18.335070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:57:18.340919 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:57:18.362310 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 23:57:18.370023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:57:18.376991 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 23:57:18.395606 dracut-cmdline[250]: dracut-dracut-053 Jan 23 23:57:18.402662 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:57:18.428508 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:57:18.442353 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:57:18.464637 systemd-resolved[269]: Positive Trust Anchors: Jan 23 23:57:18.464653 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:57:18.464685 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:57:18.470407 systemd-resolved[269]: Defaulting to hostname 'linux'. Jan 23 23:57:18.471251 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:57:18.482736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:57:18.545877 kernel: SCSI subsystem initialized Jan 23 23:57:18.553873 kernel: Loading iSCSI transport class v2.0-870. Jan 23 23:57:18.562881 kernel: iscsi: registered transport (tcp) Jan 23 23:57:18.578718 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:57:18.578748 kernel: QLogic iSCSI HBA Driver Jan 23 23:57:18.616755 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:57:18.628283 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:57:18.655798 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:57:18.655851 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:57:18.660889 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 23 23:57:18.708890 kernel: raid6: neonx8 gen() 15783 MB/s Jan 23 23:57:18.727871 kernel: raid6: neonx4 gen() 15688 MB/s Jan 23 23:57:18.746868 kernel: raid6: neonx2 gen() 13313 MB/s Jan 23 23:57:18.766869 kernel: raid6: neonx1 gen() 10489 MB/s Jan 23 23:57:18.785868 kernel: raid6: int64x8 gen() 6979 MB/s Jan 23 23:57:18.804872 kernel: raid6: int64x4 gen() 7353 MB/s Jan 23 23:57:18.824872 kernel: raid6: int64x2 gen() 6146 MB/s Jan 23 23:57:18.846191 kernel: raid6: int64x1 gen() 5072 MB/s Jan 23 23:57:18.846201 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s Jan 23 23:57:18.868928 kernel: raid6: .... 
xor() 12046 MB/s, rmw enabled Jan 23 23:57:18.868939 kernel: raid6: using neon recovery algorithm Jan 23 23:57:18.879829 kernel: xor: measuring software checksum speed Jan 23 23:57:18.879842 kernel: 8regs : 19735 MB/sec Jan 23 23:57:18.882738 kernel: 32regs : 19664 MB/sec Jan 23 23:57:18.886695 kernel: arm64_neon : 27105 MB/sec Jan 23 23:57:18.890175 kernel: xor: using function: arm64_neon (27105 MB/sec) Jan 23 23:57:18.940082 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:57:18.949205 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:57:18.962978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:57:18.982043 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jan 23 23:57:18.986109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:57:19.000964 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:57:19.026294 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Jan 23 23:57:19.055233 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:57:19.067109 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:57:19.108021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:57:19.124047 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:57:19.147108 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:57:19.155969 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:57:19.169448 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:57:19.182176 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:57:19.203040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:57:19.224858 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:57:19.240150 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 23:57:19.241703 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:57:19.241768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:57:19.251884 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:57:19.257945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:57:19.306958 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 23:57:19.306987 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 23:57:19.306997 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 23:57:19.307007 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 23 23:57:19.307017 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 23:57:19.258005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:57:19.322127 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 23:57:19.291573 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:57:19.323110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
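
The raid6 benchmark a few records above selects neonx8 for syndrome generation. What gen() times is, for the P syndrome, nothing more than a byte-wise XOR across the data blocks of a stripe; a toy illustration follows (the Q syndrome, which needs GF(2^8) arithmetic, is omitted):

    # Toy version of the P parity that the raid6 gen() benchmark measures:
    # P is the byte-wise XOR of all data blocks in a stripe.
    def xor_parity(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
    p = xor_parity(data)
    # A lost block is recovered by XOR-ing P with the surviving blocks:
    assert xor_parity([p, data[1], data[2]]) == data[0]
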
Jan 23 23:57:19.360417 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 23:57:19.360445 kernel: PTP clock support registered Jan 23 23:57:19.360456 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 23:57:19.360465 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 23 23:57:19.360476 kernel: scsi host1: storvsc_host_t Jan 23 23:57:19.360507 kernel: scsi host0: storvsc_host_t Jan 23 23:57:19.357976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:57:19.381397 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 23 23:57:19.381452 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 23:57:19.358094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:57:19.397127 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 23:57:19.390011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:57:19.408917 kernel: hv_vmbus: registering driver hv_utils Jan 23 23:57:19.412876 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 23:57:19.415874 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 23:57:19.415899 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 23:57:19.594369 systemd-resolved[269]: Clock change detected. Flushing caches. Jan 23 23:57:19.604999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:57:19.620968 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: VF slot 1 added Jan 23 23:57:19.622150 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 23:57:19.624902 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:57:19.645514 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 23:57:19.645537 kernel: hv_vmbus: registering driver hv_pci Jan 23 23:57:19.645546 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 23:57:19.658961 kernel: hv_pci 79155615-6f80-4910-bcee-1f011ee5b8af: PCI VMBus probing: Using version 0x10004 Jan 23 23:57:19.659162 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 23 23:57:19.671267 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 23 23:57:19.671460 kernel: hv_pci 79155615-6f80-4910-bcee-1f011ee5b8af: PCI host bridge to bus 6f80:00 Jan 23 23:57:19.671553 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 23:57:19.678488 kernel: pci_bus 6f80:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 23 23:57:19.678676 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 23 23:57:19.689586 kernel: pci_bus 6f80:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 23:57:19.689724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:57:19.689822 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 23 23:57:19.698723 kernel: pci 6f80:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 23 23:57:19.699052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 23 23:57:19.720963 kernel: pci 6f80:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:57:19.732616 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:57:19.732659 kernel: pci 6f80:00:02.0: enabling Extended Tags Jan 23 23:57:19.732688 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 23:57:19.742957 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#113 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:57:19.743134 kernel: pci 6f80:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6f80:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 23 23:57:19.762405 kernel: pci_bus 6f80:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 23:57:19.762582 kernel: pci 6f80:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:57:19.802268 kernel: mlx5_core 6f80:00:02.0: enabling device (0000 -> 0002) Jan 23 23:57:19.807952 kernel: mlx5_core 6f80:00:02.0: firmware version: 16.30.5026 Jan 23 23:57:20.004141 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: VF registering: eth1 Jan 23 23:57:20.004345 kernel: mlx5_core 6f80:00:02.0 eth1: joined to eth0 Jan 23 23:57:20.011018 kernel: mlx5_core 6f80:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 23 23:57:20.019962 kernel: mlx5_core 6f80:00:02.0 enP28544s1: renamed from eth1 Jan 23 23:57:20.238970 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (503) Jan 23 23:57:20.253649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 23:57:20.277170 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 23 23:57:20.301140 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (493) Jan 23 23:57:20.304230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 23 23:57:20.323256 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 23 23:57:20.328908 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 23 23:57:20.356157 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 23:57:20.380512 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:57:20.386973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:57:21.397991 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:57:21.398228 disk-uuid[609]: The operation has completed successfully. Jan 23 23:57:21.458692 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:57:21.459970 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:57:21.502076 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 23:57:21.512074 sh[722]: Success Jan 23 23:57:21.538003 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:57:21.899555 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:57:21.907070 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:57:21.917975 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
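
The verity-setup step above wires /dev/mapper/usr so that each block of the read-only /usr image is checked against a sha256 hash tree whose root must equal the verity.usrhash= value quoted on the kernel command line earlier in this log. A toy sketch of the root computation, assuming the common 4 KiB block size and omitting the per-block salt that real dm-verity applies:

    import hashlib

    BLOCK = 4096          # assumed dm-verity data block size
    FANOUT = 4096 // 32   # 128 sha256 digests fit in one 4 KiB hash block

    def verity_root(image: bytes) -> str:
        # Hash every data block, then repeatedly hash groups of digests
        # until a single root digest remains.
        level = [hashlib.sha256(image[i:i + BLOCK]).digest()
                 for i in range(0, len(image), BLOCK)]
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + FANOUT])).digest()
                     for i in range(0, len(level), FANOUT)]
        return level[0].hex()

    print(verity_root(b"\x00" * BLOCK * 4))
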
Jan 23 23:57:21.939843 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:57:21.939870 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:57:21.945434 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:57:21.949584 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:57:21.952754 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:57:22.274151 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:57:22.278452 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:57:22.293194 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:57:22.302130 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:57:22.328175 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:57:22.328235 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:57:22.332108 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:57:22.369966 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:57:22.378796 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:57:22.387762 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:57:22.398975 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:57:22.405276 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:57:22.422266 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:57:22.433018 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:57:22.459485 systemd-networkd[906]: lo: Link UP Jan 23 23:57:22.459493 systemd-networkd[906]: lo: Gained carrier Jan 23 23:57:22.461124 systemd-networkd[906]: Enumeration completed Jan 23 23:57:22.461641 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:57:22.461644 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:57:22.465221 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:57:22.470309 systemd[1]: Reached target network.target - Network. Jan 23 23:57:22.536958 kernel: mlx5_core 6f80:00:02.0 enP28544s1: Link up Jan 23 23:57:22.572554 systemd-networkd[906]: enP28544s1: Link UP Jan 23 23:57:22.575965 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: Data path switched to VF: enP28544s1 Jan 23 23:57:22.572652 systemd-networkd[906]: eth0: Link UP Jan 23 23:57:22.572781 systemd-networkd[906]: eth0: Gained carrier Jan 23 23:57:22.572791 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 23:57:22.583147 systemd-networkd[906]: enP28544s1: Gained carrier Jan 23 23:57:22.602000 systemd-networkd[906]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:57:23.383612 ignition[905]: Ignition 2.19.0 Jan 23 23:57:23.383624 ignition[905]: Stage: fetch-offline Jan 23 23:57:23.386611 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:57:23.383660 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:57:23.383668 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:57:23.383780 ignition[905]: parsed url from cmdline: "" Jan 23 23:57:23.383783 ignition[905]: no config URL provided Jan 23 23:57:23.410078 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:57:23.383788 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:57:23.383795 ignition[905]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:57:23.383799 ignition[905]: failed to fetch config: resource requires networking Jan 23 23:57:23.383968 ignition[905]: Ignition finished successfully Jan 23 23:57:23.427202 ignition[922]: Ignition 2.19.0 Jan 23 23:57:23.427208 ignition[922]: Stage: fetch Jan 23 23:57:23.427363 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:57:23.427372 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:57:23.427452 ignition[922]: parsed url from cmdline: "" Jan 23 23:57:23.427455 ignition[922]: no config URL provided Jan 23 23:57:23.427459 ignition[922]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:57:23.427466 ignition[922]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:57:23.427486 ignition[922]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 23:57:23.509422 ignition[922]: GET result: OK Jan 23 23:57:23.509511 ignition[922]: config has been read from IMDS userdata Jan 23 23:57:23.509590 ignition[922]: parsing config with SHA512: 80b2b88befd4fcee4f52a98d00f50ece8fbcc17ace48c6695f7c011577a3bbd0f696074d5fa2fe30a46f835a60da3db8b180dffbfb9c8998118b5dfc5164c3a3 Jan 23 23:57:23.513546 unknown[922]: fetched base config from "system" Jan 23 23:57:23.513916 ignition[922]: fetch: fetch complete Jan 23 23:57:23.513554 unknown[922]: fetched base config from "system" Jan 23 23:57:23.513921 ignition[922]: fetch: fetch passed Jan 23 23:57:23.513559 unknown[922]: fetched user config from "azure" Jan 23 23:57:23.513980 ignition[922]: Ignition finished successfully Jan 23 23:57:23.516954 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:57:23.544010 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 23:57:23.555329 ignition[928]: Ignition 2.19.0 Jan 23 23:57:23.555337 ignition[928]: Stage: kargs Jan 23 23:57:23.559100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:57:23.555492 ignition[928]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:57:23.555500 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:57:23.556342 ignition[928]: kargs: kargs passed Jan 23 23:57:23.556385 ignition[928]: Ignition finished successfully Jan 23 23:57:23.579198 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
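
The fetch stage above pulls the user config from the Azure instance metadata service and logs a SHA512 of what it is about to parse. A rough equivalent of that request is sketched below; note that the "Metadata: true" header and the base64 wrapping of userData are Azure IMDS behavior documented elsewhere, not visible in this log:

    import base64, hashlib, urllib.request

    # Same endpoint Ignition logs above.
    url = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        config = base64.b64decode(resp.read())   # IMDS returns base64 text

    # Ignition logs the SHA512 of the fetched config before parsing it:
    print(hashlib.sha512(config).hexdigest())
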
Jan 23 23:57:23.600799 ignition[934]: Ignition 2.19.0 Jan 23 23:57:23.600811 ignition[934]: Stage: disks Jan 23 23:57:23.601030 ignition[934]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:57:23.605096 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:57:23.601040 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:57:23.613815 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 23:57:23.602415 ignition[934]: disks: disks passed Jan 23 23:57:23.622248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:57:23.603335 ignition[934]: Ignition finished successfully Jan 23 23:57:23.631874 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:57:23.640665 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:57:23.647767 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:57:23.669177 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 23:57:23.748975 systemd-fsck[942]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 23 23:57:23.757410 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:57:23.772127 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:57:23.798169 systemd-networkd[906]: eth0: Gained IPv6LL Jan 23 23:57:23.823973 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none. Jan 23 23:57:23.824787 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:57:23.828981 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:57:23.871009 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:57:23.889954 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (953) Jan 23 23:57:23.890748 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:57:23.914831 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:57:23.914854 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:57:23.900538 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 23:57:23.928833 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:57:23.914114 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:57:23.914145 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:57:23.958253 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:57:23.925042 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 23:57:23.936148 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 23:57:23.954628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 23:57:24.386833 coreos-metadata[955]: Jan 23 23:57:24.386 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:57:24.395567 coreos-metadata[955]: Jan 23 23:57:24.395 INFO Fetch successful Jan 23 23:57:24.395567 coreos-metadata[955]: Jan 23 23:57:24.395 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:57:24.409339 coreos-metadata[955]: Jan 23 23:57:24.407 INFO Fetch successful Jan 23 23:57:24.425108 coreos-metadata[955]: Jan 23 23:57:24.425 INFO wrote hostname ci-4081.3.6-n-95a9bf6543 to /sysroot/etc/hostname Jan 23 23:57:24.426395 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:57:24.595228 initrd-setup-root[982]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 23:57:24.616528 initrd-setup-root[989]: cut: /sysroot/etc/group: No such file or directory Jan 23 23:57:24.638593 initrd-setup-root[996]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 23:57:24.645975 initrd-setup-root[1003]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 23:57:25.901477 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:57:25.913302 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:57:25.921110 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:57:25.939645 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:57:25.935575 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 23:57:25.961448 ignition[1070]: INFO : Ignition 2.19.0 Jan 23 23:57:25.966153 ignition[1070]: INFO : Stage: mount Jan 23 23:57:25.966153 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:57:25.966153 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:57:25.985252 ignition[1070]: INFO : mount: mount passed Jan 23 23:57:25.985252 ignition[1070]: INFO : Ignition finished successfully Jan 23 23:57:25.970102 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:57:25.987176 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:57:25.998192 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 23:57:26.016286 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:57:26.036977 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1082) Jan 23 23:57:26.047457 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:57:26.047485 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:57:26.050821 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:57:26.058778 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:57:26.059459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 23:57:26.086533 ignition[1099]: INFO : Ignition 2.19.0 Jan 23 23:57:26.086533 ignition[1099]: INFO : Stage: files Jan 23 23:57:26.092968 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:57:26.092968 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:57:26.092968 ignition[1099]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:57:26.114441 ignition[1099]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:57:26.114441 ignition[1099]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:57:26.202485 ignition[1099]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:57:26.208738 ignition[1099]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:57:26.208738 ignition[1099]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:57:26.202857 unknown[1099]: wrote ssh authorized keys file for user: core Jan 23 23:57:26.229664 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:57:26.238229 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 23 23:57:26.273565 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 23:57:26.405404 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:57:26.414171 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 23:57:26.801594 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 23:57:27.040976 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:57:27.040976 ignition[1099]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:57:27.060450 ignition[1099]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:57:27.060450 ignition[1099]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:57:27.060450 ignition[1099]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:57:27.060450 ignition[1099]: INFO : files: files passed Jan 23 23:57:27.060450 ignition[1099]: INFO : Ignition finished successfully Jan 23 23:57:27.055986 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:57:27.079715 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:57:27.089117 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:57:27.105362 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:57:27.157927 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:57:27.157927 initrd-setup-root-after-ignition[1127]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:57:27.105449 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:57:27.179934 initrd-setup-root-after-ignition[1131]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:57:27.143991 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:57:27.154382 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:57:27.182124 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:57:27.213376 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:57:27.214974 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 23 23:57:27.223284 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:57:27.232610 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:57:27.240841 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:57:27.257176 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:57:27.270583 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:57:27.286115 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:57:27.304910 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:57:27.315554 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:57:27.320658 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:57:27.329295 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:57:27.329362 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:57:27.342098 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:57:27.351328 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:57:27.360450 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:57:27.369371 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:57:27.378481 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:57:27.387881 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:57:27.396480 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:57:27.406643 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:57:27.416606 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:57:27.424960 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:57:27.432546 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:57:27.432607 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:57:27.444281 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:57:27.454218 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:57:27.463850 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:57:27.463890 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:57:27.473959 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:57:27.474017 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:57:27.488430 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:57:27.488492 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:57:27.497441 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:57:27.497483 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:57:27.505738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 23:57:27.505775 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:57:27.530147 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 23 23:57:27.553796 ignition[1153]: INFO : Ignition 2.19.0 Jan 23 23:57:27.553796 ignition[1153]: INFO : Stage: umount Jan 23 23:57:27.556127 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:57:27.582711 ignition[1153]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:57:27.582711 ignition[1153]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:57:27.582711 ignition[1153]: INFO : umount: umount passed Jan 23 23:57:27.582711 ignition[1153]: INFO : Ignition finished successfully Jan 23 23:57:27.560270 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:57:27.560334 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:57:27.565646 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:57:27.565683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:57:27.582492 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:57:27.586013 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:57:27.598573 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:57:27.599047 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:57:27.599150 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:57:27.606827 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:57:27.607209 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:57:27.615273 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:57:27.615320 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:57:27.623334 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:57:27.623372 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:57:27.632020 systemd[1]: Stopped target network.target - Network. Jan 23 23:57:27.640379 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:57:27.640439 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:57:27.649682 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:57:27.658303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:57:27.661994 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:57:27.668438 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:57:27.676517 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:57:27.684499 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:57:27.684551 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:57:27.697893 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:57:27.697963 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:57:27.707680 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:57:27.707741 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:57:27.715820 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:57:27.715864 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:57:27.725240 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:57:27.729665 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 23 23:57:27.741435 systemd-networkd[906]: eth0: DHCPv6 lease lost Jan 23 23:57:27.892714 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: Data path switched from VF: enP28544s1 Jan 23 23:57:27.743136 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:57:27.743309 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:57:27.753802 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:57:27.753840 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:57:27.777104 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:57:27.787600 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:57:27.787668 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:57:27.798087 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:57:27.810218 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:57:27.810335 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:57:27.836330 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:57:27.837467 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:57:27.846633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:57:27.846704 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:57:27.854390 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:57:27.854426 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:57:27.864720 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:57:27.864767 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:57:27.885109 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:57:27.885160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:57:27.892784 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:57:27.892833 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:57:27.922655 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:57:27.929551 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:57:27.929617 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:57:27.937511 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:57:27.937559 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:57:27.947721 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:57:27.947762 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:57:27.958884 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 23:57:27.958927 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:57:27.972165 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:57:27.972210 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:57:27.980969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 23 23:57:27.981003 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:57:27.991460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:57:27.991495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:57:28.001256 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:57:28.001353 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:57:28.009545 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:57:28.009625 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:57:28.017867 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:57:28.017952 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:57:28.028386 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:57:28.037127 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:57:28.037200 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:57:28.069217 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:57:28.086482 systemd[1]: Switching root. Jan 23 23:57:28.201733 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 23 23:57:28.201780 systemd-journald[217]: Journal stopped Jan 23 23:57:33.066737 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:57:33.066762 kernel: SELinux: policy capability open_perms=1 Jan 23 23:57:33.066772 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:57:33.066780 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:57:33.066789 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:57:33.066797 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:57:33.066806 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:57:33.066814 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:57:33.066823 kernel: audit: type=1403 audit(1769212649.581:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:57:33.066833 systemd[1]: Successfully loaded SELinux policy in 181.559ms. Jan 23 23:57:33.066846 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.821ms. Jan 23 23:57:33.066857 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:57:33.066866 systemd[1]: Detected virtualization microsoft. Jan 23 23:57:33.066874 systemd[1]: Detected architecture arm64. Jan 23 23:57:33.066884 systemd[1]: Detected first boot. Jan 23 23:57:33.066895 systemd[1]: Hostname set to <ci-4081.3.6-n-95a9bf6543>. Jan 23 23:57:33.066905 systemd[1]: Initializing machine ID from random generator. Jan 23 23:57:33.066914 zram_generator::config[1193]: No configuration found. Jan 23 23:57:33.066924 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:57:33.066933 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:57:33.066950 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 23:57:33.066961 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 23:57:33.066973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:57:33.066982 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:57:33.066992 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:57:33.067001 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:57:33.067011 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:57:33.067020 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:57:33.067030 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:57:33.067041 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:57:33.067050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:57:33.067061 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:57:33.067071 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:57:33.067080 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:57:33.067090 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:57:33.067099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:57:33.067108 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 23:57:33.067119 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:57:33.067129 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 23:57:33.067138 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:57:33.067150 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:57:33.067160 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:57:33.067170 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:57:33.067179 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:57:33.067189 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:57:33.067200 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:57:33.067210 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:57:33.067219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:57:33.067229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:57:33.067239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:57:33.067249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:57:33.067260 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:57:33.067271 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:57:33.067280 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:57:33.067290 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:57:33.067300 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 23 23:57:33.067310 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:57:33.067319 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:57:33.067331 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:57:33.067341 systemd[1]: Reached target machines.target - Containers. Jan 23 23:57:33.067351 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:57:33.067361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:57:33.067371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:57:33.067380 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:57:33.067390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:57:33.067400 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:57:33.067411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:57:33.067421 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:57:33.067430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:57:33.067440 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:57:33.067450 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:57:33.067460 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:57:33.067470 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:57:33.067480 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:57:33.067491 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:57:33.067501 kernel: fuse: init (API version 7.39) Jan 23 23:57:33.067509 kernel: loop: module loaded Jan 23 23:57:33.067518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:57:33.067528 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:57:33.067553 systemd-journald[1289]: Collecting audit messages is disabled. Jan 23 23:57:33.067575 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:57:33.067585 systemd-journald[1289]: Journal started Jan 23 23:57:33.067606 systemd-journald[1289]: Runtime Journal (/run/log/journal/8f67db5eee4f4f8da480b30876402c1e) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:57:32.225309 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:57:32.363739 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 23:57:32.364224 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:57:32.364554 systemd[1]: systemd-journald.service: Consumed 2.462s CPU time. Jan 23 23:57:33.083981 kernel: ACPI: bus type drm_connector registered Jan 23 23:57:33.099137 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:57:33.107306 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:57:33.107360 systemd[1]: Stopped verity-setup.service. 
Jan 23 23:57:33.120863 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:57:33.121731 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:57:33.128347 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:57:33.133177 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:57:33.138193 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:57:33.143237 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:57:33.148140 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:57:33.153970 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:57:33.159275 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:57:33.164834 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:57:33.164993 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:57:33.170296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:57:33.170428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:57:33.175472 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:57:33.175595 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:57:33.180458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:57:33.180578 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:57:33.186200 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:57:33.186337 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:57:33.191182 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:57:33.191309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:57:33.196169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:57:33.201301 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:57:33.207094 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:57:33.212733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:57:33.226529 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:57:33.235019 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:57:33.243067 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:57:33.248289 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:57:33.248327 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:57:33.253990 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:57:33.260428 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:57:33.266481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:57:33.271191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:57:33.298097 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 23 23:57:33.304174 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:57:33.309999 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:57:33.311024 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:57:33.315795 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:57:33.316809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:57:33.327090 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:57:33.335129 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:57:33.341914 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:57:33.356692 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:57:33.365308 systemd-journald[1289]: Time spent on flushing to /var/log/journal/8f67db5eee4f4f8da480b30876402c1e is 12.403ms for 895 entries. Jan 23 23:57:33.365308 systemd-journald[1289]: System Journal (/var/log/journal/8f67db5eee4f4f8da480b30876402c1e) is 8.0M, max 2.6G, 2.6G free. Jan 23 23:57:33.391729 systemd-journald[1289]: Received client request to flush runtime journal. Jan 23 23:57:33.365378 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:57:33.377969 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:57:33.387934 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:57:33.393447 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:57:33.410014 kernel: loop0: detected capacity change from 0 to 114432 Jan 23 23:57:33.410529 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:57:33.420169 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:57:33.425488 udevadm[1330]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:57:33.455651 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Jan 23 23:57:33.455667 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Jan 23 23:57:33.460165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:57:33.470226 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:57:33.480368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:57:33.481028 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:57:33.512109 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:57:33.627083 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:57:33.644128 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:57:33.661074 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Jan 23 23:57:33.661093 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. 
Jan 23 23:57:33.664730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:57:33.788967 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:57:33.824969 kernel: loop1: detected capacity change from 0 to 114328 Jan 23 23:57:34.036982 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:57:34.050101 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:57:34.068937 systemd-udevd[1353]: Using default interface naming scheme 'v255'. Jan 23 23:57:34.204966 kernel: loop2: detected capacity change from 0 to 207008 Jan 23 23:57:34.209554 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:57:34.225221 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:57:34.269277 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:57:34.292509 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 23:57:34.309971 kernel: loop3: detected capacity change from 0 to 31320 Jan 23 23:57:34.335161 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:57:34.373971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#277 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:57:34.381020 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 23:57:34.441924 kernel: hv_vmbus: registering driver hv_balloon Jan 23 23:57:34.442045 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 23:57:34.442072 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 23 23:57:34.457771 systemd-networkd[1364]: lo: Link UP Jan 23 23:57:34.458178 systemd-networkd[1364]: lo: Gained carrier Jan 23 23:57:34.460107 systemd-networkd[1364]: Enumeration completed Jan 23 23:57:34.460437 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:57:34.460444 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:57:34.463199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:57:34.468554 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:57:34.483101 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 23:57:34.483178 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 23:57:34.486389 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:57:34.490219 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 23:57:34.498930 kernel: Console: switching to colour dummy device 80x25 Jan 23 23:57:34.507030 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:57:34.509257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:57:34.511036 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:57:34.532213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:57:34.548400 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1369) Jan 23 23:57:34.548241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:57:34.548594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 23:57:34.559972 kernel: mlx5_core 6f80:00:02.0 enP28544s1: Link up Jan 23 23:57:34.569248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:57:34.591296 kernel: hv_netvsc 7ced8d87-92e0-7ced-8d87-92e07ced8d87 eth0: Data path switched to VF: enP28544s1 Jan 23 23:57:34.591115 systemd-networkd[1364]: enP28544s1: Link UP Jan 23 23:57:34.591223 systemd-networkd[1364]: eth0: Link UP Jan 23 23:57:34.591227 systemd-networkd[1364]: eth0: Gained carrier Jan 23 23:57:34.591242 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:57:34.605587 systemd-networkd[1364]: enP28544s1: Gained carrier Jan 23 23:57:34.615187 systemd-networkd[1364]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:57:34.626432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 23:57:34.642112 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:57:34.698256 kernel: loop4: detected capacity change from 0 to 114432 Jan 23 23:57:34.702926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:57:34.719968 kernel: loop5: detected capacity change from 0 to 114328 Jan 23 23:57:34.738961 kernel: loop6: detected capacity change from 0 to 207008 Jan 23 23:57:34.754971 kernel: loop7: detected capacity change from 0 to 31320 Jan 23 23:57:34.762512 (sd-merge)[1450]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 23:57:34.762953 (sd-merge)[1450]: Merged extensions into '/usr'. Jan 23 23:57:34.766143 systemd[1]: Reloading requested from client PID 1327 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:57:34.766155 systemd[1]: Reloading... Jan 23 23:57:34.826972 zram_generator::config[1482]: No configuration found. Jan 23 23:57:34.957548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:57:35.034716 systemd[1]: Reloading finished in 268 ms. Jan 23 23:57:35.062403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:57:35.068635 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:57:35.081090 systemd[1]: Starting ensure-sysext.service... Jan 23 23:57:35.092145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:57:35.100575 systemd[1]: Reloading requested from client PID 1540 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:57:35.100684 systemd[1]: Reloading... Jan 23 23:57:35.126741 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:57:35.127037 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:57:35.127678 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:57:35.127886 systemd-tmpfiles[1541]: ACLs are not supported, ignoring. Jan 23 23:57:35.127931 systemd-tmpfiles[1541]: ACLs are not supported, ignoring. 
Jan 23 23:57:35.147252 systemd-tmpfiles[1541]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:57:35.147264 systemd-tmpfiles[1541]: Skipping /boot Jan 23 23:57:35.161081 systemd-tmpfiles[1541]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:57:35.161094 systemd-tmpfiles[1541]: Skipping /boot Jan 23 23:57:35.206992 zram_generator::config[1584]: No configuration found. Jan 23 23:57:35.302592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:57:35.380566 systemd[1]: Reloading finished in 279 ms. Jan 23 23:57:35.399699 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:57:35.409423 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:57:35.430169 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:57:35.437872 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:57:35.449707 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:57:35.459366 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:57:35.467292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:57:35.475257 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:57:35.482727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:57:35.486210 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:57:35.496832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:57:35.504110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:57:35.510935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:57:35.512198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:57:35.512554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:57:35.520282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:57:35.520544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:57:35.533196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:57:35.535745 lvm[1634]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:57:35.536256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:57:35.551725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:57:35.560289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:57:35.561342 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:57:35.563002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:57:35.570436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 23 23:57:35.570582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:57:35.576896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:57:35.577149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:57:35.593036 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:57:35.599690 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:57:35.606268 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:57:35.612531 systemd-resolved[1641]: Positive Trust Anchors: Jan 23 23:57:35.612546 systemd-resolved[1641]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:57:35.612577 systemd-resolved[1641]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:57:35.617463 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:57:35.622909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:57:35.629201 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:57:35.633561 lvm[1666]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:57:35.637126 augenrules[1663]: No rules Jan 23 23:57:35.637491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:57:35.648241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:57:35.649027 systemd-resolved[1641]: Using system hostname 'ci-4081.3.6-n-95a9bf6543'. Jan 23 23:57:35.656247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:57:35.668244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:57:35.672895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:57:35.673115 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:57:35.678707 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:57:35.684491 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:57:35.690427 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:57:35.697542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:57:35.697699 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:57:35.703504 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:57:35.703652 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:57:35.709339 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:57:35.709481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 23 23:57:35.715662 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:57:35.715786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:57:35.724257 systemd[1]: Finished ensure-sysext.service. Jan 23 23:57:35.731703 systemd[1]: Reached target network.target - Network. Jan 23 23:57:35.735710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:57:35.741183 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:57:35.741252 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:57:35.946633 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:57:35.952080 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:57:36.021070 systemd-networkd[1364]: eth0: Gained IPv6LL Jan 23 23:57:36.024487 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:57:36.031245 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:57:38.370838 ldconfig[1322]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:57:38.382021 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:57:38.392073 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:57:38.403169 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:57:38.408094 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:57:38.412562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:57:38.417871 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:57:38.423328 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:57:38.427753 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:57:38.433078 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:57:38.438268 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:57:38.438298 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:57:38.442175 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:57:38.447158 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:57:38.453277 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:57:38.461459 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:57:38.466264 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:57:38.470876 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:57:38.474805 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:57:38.478708 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 23 23:57:38.478732 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:57:38.502029 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 23:57:38.508056 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:57:38.521264 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:57:38.526802 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:57:38.532297 (chronyd)[1689]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 23 23:57:38.540614 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:57:38.544969 jq[1695]: false Jan 23 23:57:38.546175 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:57:38.553034 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:57:38.553078 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 23 23:57:38.554377 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 23:57:38.559112 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 23:57:38.561106 chronyd[1701]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 23 23:57:38.563128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:38.563817 KVP[1697]: KVP starting; pid is:1697 Jan 23 23:57:38.571766 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:57:38.577188 KVP[1697]: KVP LIC Version: 3.1 Jan 23 23:57:38.580030 kernel: hv_utils: KVP IC version 4.0 Jan 23 23:57:38.580147 chronyd[1701]: Timezone right/UTC failed leap second check, ignoring Jan 23 23:57:38.580344 chronyd[1701]: Loaded seccomp filter (level 2) Jan 23 23:57:38.583260 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:57:38.589413 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:57:38.595364 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:57:38.605342 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
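chronyd starts here with an NTS-capable build and a level-2 seccomp filter; later in this log ("Selected source PHC0") it synchronizes from the host's PTP hardware clock rather than a network server. Assuming the chronyc CLI is installed alongside it, sync state can be checked with a thin wrapper like this sketch:

import subprocess

# `chronyc tracking` reports reference ID, stratum and current offset.
# Assumes chronyc is on PATH next to the chronyd seen in this log.
result = subprocess.run(["chronyc", "tracking"],
                        capture_output=True, text=True, check=True)
print(result.stdout)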
Jan 23 23:57:38.610471 extend-filesystems[1696]: Found loop4
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found loop5
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found loop6
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found loop7
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found sda
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found sda1
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found sda2
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found sda3
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found usr
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found sda4
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found sda6
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found sda7
Jan 23 23:57:38.618351 extend-filesystems[1696]: Found sda9
Jan 23 23:57:38.618351 extend-filesystems[1696]: Checking size of /dev/sda9
Jan 23 23:57:38.733599 dbus-daemon[1692]: [system] SELinux support is enabled
Jan 23 23:57:38.804875 extend-filesystems[1696]: Old size kept for /dev/sda9
Jan 23 23:57:38.804875 extend-filesystems[1696]: Found sr0
Jan 23 23:57:38.624182 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 23:57:38.635405 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 23:57:38.635974 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 23:57:38.644232 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 23:57:38.822341 update_engine[1719]: I20260123 23:57:38.736914 1719 main.cc:92] Flatcar Update Engine starting
Jan 23 23:57:38.822341 update_engine[1719]: I20260123 23:57:38.751152 1719 update_check_scheduler.cc:74] Next update check in 3m46s
Jan 23 23:57:38.652752 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 23:57:38.822626 jq[1722]: true
Jan 23 23:57:38.671321 systemd[1]: Started chronyd.service - NTP client/server.
Jan 23 23:57:38.687066 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 23:57:38.687240 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 23:57:38.823005 jq[1734]: true
Jan 23 23:57:38.688197 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 23:57:38.688381 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 23:57:38.700208 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 23:57:38.700392 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 23:57:38.720716 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 23:57:38.729336 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 23:57:38.732064 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 23:57:38.740309 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 23:57:38.770968 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 23:57:38.771014 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
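The "Found loop4" through "Found sr0" lines above are extend-filesystems probing every block device before concluding that /dev/sda9 keeps its old size. A rough Python equivalent of that enumeration, reading the kernel's /sys/block tree (a sketch only; the real service also compares partition and filesystem sizes):

import pathlib

# List each disk under /sys/block with its partitions, roughly the
# scan behind the "Found ..." lines above.
for disk in sorted(pathlib.Path("/sys/block").iterdir()):
    partitions = sorted(p.name for p in disk.iterdir()
                        if p.name.startswith(disk.name))
    print(disk.name, partitions)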
Jan 23 23:57:38.777625 (ntainerd)[1736]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:57:38.788063 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:57:38.788084 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:57:38.793970 systemd-logind[1715]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 23 23:57:38.798259 systemd-logind[1715]: New seat seat0. Jan 23 23:57:38.804520 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:57:38.810569 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:57:38.836587 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:57:38.844342 tar[1732]: linux-arm64/LICENSE Jan 23 23:57:38.844559 tar[1732]: linux-arm64/helm Jan 23 23:57:38.850157 coreos-metadata[1691]: Jan 23 23:57:38.850 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:57:38.853093 coreos-metadata[1691]: Jan 23 23:57:38.853 INFO Fetch successful Jan 23 23:57:38.853241 coreos-metadata[1691]: Jan 23 23:57:38.853 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 23:57:38.858549 coreos-metadata[1691]: Jan 23 23:57:38.858 INFO Fetch successful Jan 23 23:57:38.858871 coreos-metadata[1691]: Jan 23 23:57:38.858 INFO Fetching http://168.63.129.16/machine/71c41e56-37a2-4bb7-a352-c9b4505c1053/d467eaa1%2D0acf%2D4e8c%2D97a7%2D125896de849c.%5Fci%2D4081.3.6%2Dn%2D95a9bf6543?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 23:57:38.860870 coreos-metadata[1691]: Jan 23 23:57:38.860 INFO Fetch successful Jan 23 23:57:38.861332 coreos-metadata[1691]: Jan 23 23:57:38.861 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:57:38.876004 coreos-metadata[1691]: Jan 23 23:57:38.875 INFO Fetch successful Jan 23 23:57:38.936383 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:57:38.949720 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:57:38.958542 bash[1777]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:57:38.969748 locksmithd[1759]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:57:38.970977 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1730) Jan 23 23:57:38.977419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:57:38.988500 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 23:57:39.159153 sshd_keygen[1720]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:57:39.184024 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:57:39.194316 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:57:39.208152 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 23:57:39.221280 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:57:39.223131 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:57:39.239276 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
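coreos-metadata above talks to two Azure endpoints: the WireServer at 168.63.129.16 for goal state, and the instance metadata service (IMDS) at 169.254.169.254 for values like vmSize. The IMDS request it logs is reproducible with the standard library; the one requirement is the "Metadata: true" header (URL copied verbatim from the log):

import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

# IMDS rejects requests that lack this header.
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # the VM size string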
Jan 23 23:57:39.263061 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 23:57:39.272899 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:57:39.288140 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:57:39.305278 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 23:57:39.315228 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:57:39.479215 containerd[1736]: time="2026-01-23T23:57:39.478566240Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:57:39.515880 containerd[1736]: time="2026-01-23T23:57:39.515671000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518032 containerd[1736]: time="2026-01-23T23:57:39.517682560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518032 containerd[1736]: time="2026-01-23T23:57:39.517714280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:57:39.518032 containerd[1736]: time="2026-01-23T23:57:39.517729320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:57:39.518032 containerd[1736]: time="2026-01-23T23:57:39.517873920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:57:39.518032 containerd[1736]: time="2026-01-23T23:57:39.517889760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518032 containerd[1736]: time="2026-01-23T23:57:39.517960200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518032 containerd[1736]: time="2026-01-23T23:57:39.517974480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518339 containerd[1736]: time="2026-01-23T23:57:39.518319480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518392 containerd[1736]: time="2026-01-23T23:57:39.518381000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518447 containerd[1736]: time="2026-01-23T23:57:39.518434280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518556 containerd[1736]: time="2026-01-23T23:57:39.518541240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518906 containerd[1736]: time="2026-01-23T23:57:39.518678400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:57:39.518906 containerd[1736]: time="2026-01-23T23:57:39.518874800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:57:39.524712 containerd[1736]: time="2026-01-23T23:57:39.524676440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:57:39.524712 containerd[1736]: time="2026-01-23T23:57:39.524708840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:57:39.525015 containerd[1736]: time="2026-01-23T23:57:39.524859240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:57:39.525015 containerd[1736]: time="2026-01-23T23:57:39.524915840Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:57:39.539510 containerd[1736]: time="2026-01-23T23:57:39.539470680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:57:39.539594 containerd[1736]: time="2026-01-23T23:57:39.539527240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:57:39.539594 containerd[1736]: time="2026-01-23T23:57:39.539546240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:57:39.539594 containerd[1736]: time="2026-01-23T23:57:39.539562440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:57:39.539594 containerd[1736]: time="2026-01-23T23:57:39.539578040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:57:39.540003 containerd[1736]: time="2026-01-23T23:57:39.539743280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:57:39.540067 containerd[1736]: time="2026-01-23T23:57:39.540004920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:57:39.540140 containerd[1736]: time="2026-01-23T23:57:39.540110960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:57:39.540169 containerd[1736]: time="2026-01-23T23:57:39.540133120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:57:39.540199 containerd[1736]: time="2026-01-23T23:57:39.540166120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:57:39.540199 containerd[1736]: time="2026-01-23T23:57:39.540184160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:57:39.540253 containerd[1736]: time="2026-01-23T23:57:39.540196800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:57:39.540253 containerd[1736]: time="2026-01-23T23:57:39.540213520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 23 23:57:39.540253 containerd[1736]: time="2026-01-23T23:57:39.540227520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:57:39.540253 containerd[1736]: time="2026-01-23T23:57:39.540242200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540254480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540267080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540278600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540301000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540315360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540328760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540342880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540354800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540367440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540378720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540390800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540407280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540421240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540427 containerd[1736]: time="2026-01-23T23:57:39.540435960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540448600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540461320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540477600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540499040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540511720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540522560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540572680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540589960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540600720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540615600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540626680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540641080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540651640Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:57:39.540924 containerd[1736]: time="2026-01-23T23:57:39.540662480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:57:39.543031 containerd[1736]: time="2026-01-23T23:57:39.540936840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:57:39.543031 containerd[1736]: time="2026-01-23T23:57:39.543032440Z" level=info msg="Connect containerd service" Jan 23 23:57:39.543183 containerd[1736]: time="2026-01-23T23:57:39.543069640Z" level=info msg="using legacy CRI server" Jan 23 23:57:39.543183 containerd[1736]: time="2026-01-23T23:57:39.543077720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:57:39.543183 containerd[1736]: time="2026-01-23T23:57:39.543163560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:57:39.543794 containerd[1736]: time="2026-01-23T23:57:39.543764160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:57:39.544975 
containerd[1736]: time="2026-01-23T23:57:39.543932400Z" level=info msg="Start subscribing containerd event" Jan 23 23:57:39.544975 containerd[1736]: time="2026-01-23T23:57:39.543993600Z" level=info msg="Start recovering state" Jan 23 23:57:39.544975 containerd[1736]: time="2026-01-23T23:57:39.544066960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:57:39.544975 containerd[1736]: time="2026-01-23T23:57:39.544068480Z" level=info msg="Start event monitor" Jan 23 23:57:39.544975 containerd[1736]: time="2026-01-23T23:57:39.544104560Z" level=info msg="Start snapshots syncer" Jan 23 23:57:39.544975 containerd[1736]: time="2026-01-23T23:57:39.544115240Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:57:39.544975 containerd[1736]: time="2026-01-23T23:57:39.544122800Z" level=info msg="Start streaming server" Jan 23 23:57:39.544975 containerd[1736]: time="2026-01-23T23:57:39.544106800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:57:39.544975 containerd[1736]: time="2026-01-23T23:57:39.544237560Z" level=info msg="containerd successfully booted in 0.067634s" Jan 23 23:57:39.544580 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:57:39.590344 tar[1732]: linux-arm64/README.md Jan 23 23:57:39.604898 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:57:39.828608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:39.834389 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:57:39.834741 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:57:39.844008 systemd[1]: Startup finished in 594ms (kernel) + 11.504s (initrd) + 10.442s (userspace) = 22.542s. Jan 23 23:57:40.208401 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:40.210509 login[1838]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:40.219276 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:57:40.226344 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:57:40.230446 systemd-logind[1715]: New session 1 of user core. Jan 23 23:57:40.234375 systemd-logind[1715]: New session 2 of user core. Jan 23 23:57:40.254415 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:57:40.262197 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:57:40.273085 kubelet[1851]: E0123 23:57:40.273044 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:57:40.275681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:57:40.275821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:57:40.282273 (systemd)[1864]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:57:40.442436 systemd[1864]: Queued start job for default target default.target. Jan 23 23:57:40.453939 systemd[1864]: Created slice app.slice - User Application Slice. 
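The only error in the containerd startup above is the CNI loader finding no network config in /etc/cni/net.d, which is expected this early: kubelet or a CNI add-on normally installs one later. For illustration, a hypothetical minimal bridge conflist that would satisfy the loader; the network name, bridge device and subnet are placeholders, not values from this host:

import json
import pathlib

conf = {
    "cniVersion": "0.4.0",
    "name": "containerd-net",  # placeholder name
    "plugins": [
        {"type": "bridge", "bridge": "cni0", "isGateway": True, "ipMasq": True,
         "ipam": {"type": "host-local",
                  "ranges": [[{"subnet": "10.88.0.0/16"}]],  # placeholder subnet
                  "routes": [{"dst": "0.0.0.0/0"}]}},
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
path.write_text(json.dumps(conf, indent=2))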
Jan 23 23:57:40.453980 systemd[1864]: Reached target paths.target - Paths. Jan 23 23:57:40.453992 systemd[1864]: Reached target timers.target - Timers. Jan 23 23:57:40.455162 systemd[1864]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:57:40.465025 systemd[1864]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:57:40.465080 systemd[1864]: Reached target sockets.target - Sockets. Jan 23 23:57:40.465093 systemd[1864]: Reached target basic.target - Basic System. Jan 23 23:57:40.465133 systemd[1864]: Reached target default.target - Main User Target. Jan 23 23:57:40.465158 systemd[1864]: Startup finished in 177ms. Jan 23 23:57:40.465254 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:57:40.466869 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:57:40.468228 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:57:41.120321 waagent[1835]: 2026-01-23T23:57:41.120237Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 23 23:57:41.124588 waagent[1835]: 2026-01-23T23:57:41.124535Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 23 23:57:41.127896 waagent[1835]: 2026-01-23T23:57:41.127859Z INFO Daemon Daemon Python: 3.11.9 Jan 23 23:57:41.131160 waagent[1835]: 2026-01-23T23:57:41.131117Z INFO Daemon Daemon Run daemon Jan 23 23:57:41.134234 waagent[1835]: 2026-01-23T23:57:41.134196Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 23 23:57:41.141115 waagent[1835]: 2026-01-23T23:57:41.141068Z INFO Daemon Daemon Using waagent for provisioning Jan 23 23:57:41.145055 waagent[1835]: 2026-01-23T23:57:41.144998Z INFO Daemon Daemon Activate resource disk Jan 23 23:57:41.148450 waagent[1835]: 2026-01-23T23:57:41.148410Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 23:57:41.157556 waagent[1835]: 2026-01-23T23:57:41.157505Z INFO Daemon Daemon Found device: None Jan 23 23:57:41.160856 waagent[1835]: 2026-01-23T23:57:41.160820Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 23:57:41.167281 waagent[1835]: 2026-01-23T23:57:41.167247Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 23:57:41.176942 waagent[1835]: 2026-01-23T23:57:41.176898Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:57:41.181225 waagent[1835]: 2026-01-23T23:57:41.181189Z INFO Daemon Daemon Running default provisioning handler Jan 23 23:57:41.191072 waagent[1835]: 2026-01-23T23:57:41.191010Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 23:57:41.201388 waagent[1835]: 2026-01-23T23:57:41.201338Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 23:57:41.208401 waagent[1835]: 2026-01-23T23:57:41.208365Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 23:57:41.212202 waagent[1835]: 2026-01-23T23:57:41.212170Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 23:57:41.322985 waagent[1835]: 2026-01-23T23:57:41.322850Z INFO Daemon Daemon Successfully mounted dvd Jan 23 23:57:41.349515 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 23 23:57:41.350843 waagent[1835]: 2026-01-23T23:57:41.350775Z INFO Daemon Daemon Detect protocol endpoint Jan 23 23:57:41.354690 waagent[1835]: 2026-01-23T23:57:41.354643Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:57:41.358961 waagent[1835]: 2026-01-23T23:57:41.358916Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 23:57:41.363774 waagent[1835]: 2026-01-23T23:57:41.363738Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 23:57:41.367702 waagent[1835]: 2026-01-23T23:57:41.367664Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 23:57:41.371589 waagent[1835]: 2026-01-23T23:57:41.371525Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 23:57:41.417832 waagent[1835]: 2026-01-23T23:57:41.417792Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 23:57:41.422891 waagent[1835]: 2026-01-23T23:57:41.422868Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 23:57:41.426738 waagent[1835]: 2026-01-23T23:57:41.426707Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 23:57:41.619081 waagent[1835]: 2026-01-23T23:57:41.618992Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 23:57:41.624655 waagent[1835]: 2026-01-23T23:57:41.624559Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 23:57:41.632602 waagent[1835]: 2026-01-23T23:57:41.632551Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:57:41.651719 waagent[1835]: 2026-01-23T23:57:41.651678Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 23:57:41.656354 waagent[1835]: 2026-01-23T23:57:41.656313Z INFO Daemon Jan 23 23:57:41.658498 waagent[1835]: 2026-01-23T23:57:41.658462Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0e958c5f-707b-4896-8a3a-c3d864db446c eTag: 17358760371477842910 source: Fabric] Jan 23 23:57:41.667406 waagent[1835]: 2026-01-23T23:57:41.667366Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 23:57:41.672676 waagent[1835]: 2026-01-23T23:57:41.672636Z INFO Daemon Jan 23 23:57:41.674816 waagent[1835]: 2026-01-23T23:57:41.674782Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:57:41.683698 waagent[1835]: 2026-01-23T23:57:41.683666Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 23:57:41.833472 waagent[1835]: 2026-01-23T23:57:41.833390Z INFO Daemon Downloaded certificate {'thumbprint': '666102BC455F6C13462E801798B5420D6114C177', 'hasPrivateKey': True} Jan 23 23:57:41.841149 waagent[1835]: 2026-01-23T23:57:41.841104Z INFO Daemon Fetch goal state completed Jan 23 23:57:41.851269 waagent[1835]: 2026-01-23T23:57:41.851216Z INFO Daemon Daemon Starting provisioning Jan 23 23:57:41.855125 waagent[1835]: 2026-01-23T23:57:41.855081Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 23:57:41.858704 waagent[1835]: 2026-01-23T23:57:41.858658Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-95a9bf6543] Jan 23 23:57:41.881962 waagent[1835]: 2026-01-23T23:57:41.881155Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-95a9bf6543] Jan 23 23:57:41.886326 waagent[1835]: 2026-01-23T23:57:41.886273Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 23:57:41.891105 waagent[1835]: 2026-01-23T23:57:41.891066Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 23:57:41.934907 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
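waagent's "Test for route to 168.63.129.16" and "Examine /proc/net/route for primary interface" steps both parse the kernel routing table (its raw dump appears later in this log). The same check in Python, as a sketch: /proc/net/route stores addresses as hex in host byte order, so this assumes a little-endian guest, which this VM is:

import socket
import struct

WIRESERVER = "168.63.129.16"

def route_exists(ip: str = WIRESERVER) -> bool:
    # inet_aton gives network byte order; "<I" reinterprets it the way
    # a little-endian kernel prints /proc/net/route entries.
    target = struct.unpack("<I", socket.inet_aton(ip))[0]
    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            dest, mask = int(fields[1], 16), int(fields[7], 16)
            if target & mask == dest & mask:
                return True
    return False

print(route_exists())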
Jan 23 23:57:41.934914 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:57:41.934965 systemd-networkd[1364]: eth0: DHCP lease lost Jan 23 23:57:41.936392 waagent[1835]: 2026-01-23T23:57:41.936297Z INFO Daemon Daemon Create user account if not exists Jan 23 23:57:41.940877 waagent[1835]: 2026-01-23T23:57:41.940831Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 23:57:41.941012 systemd-networkd[1364]: eth0: DHCPv6 lease lost Jan 23 23:57:41.945218 waagent[1835]: 2026-01-23T23:57:41.945160Z INFO Daemon Daemon Configure sudoer Jan 23 23:57:41.948745 waagent[1835]: 2026-01-23T23:57:41.948697Z INFO Daemon Daemon Configure sshd Jan 23 23:57:41.952224 waagent[1835]: 2026-01-23T23:57:41.952179Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 23:57:41.962554 waagent[1835]: 2026-01-23T23:57:41.962501Z INFO Daemon Daemon Deploy ssh public key. Jan 23 23:57:41.974065 systemd-networkd[1364]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:57:43.077397 waagent[1835]: 2026-01-23T23:57:43.077346Z INFO Daemon Daemon Provisioning complete Jan 23 23:57:43.094052 waagent[1835]: 2026-01-23T23:57:43.094008Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 23:57:43.099363 waagent[1835]: 2026-01-23T23:57:43.099315Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 23:57:43.107039 waagent[1835]: 2026-01-23T23:57:43.106996Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 23 23:57:43.230756 waagent[1916]: 2026-01-23T23:57:43.230685Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 23 23:57:43.231648 waagent[1916]: 2026-01-23T23:57:43.231058Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 23 23:57:43.231648 waagent[1916]: 2026-01-23T23:57:43.231124Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 23 23:57:43.270971 waagent[1916]: 2026-01-23T23:57:43.270048Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 23 23:57:43.270971 waagent[1916]: 2026-01-23T23:57:43.270271Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:57:43.270971 waagent[1916]: 2026-01-23T23:57:43.270330Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:57:43.278334 waagent[1916]: 2026-01-23T23:57:43.278279Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:57:43.283884 waagent[1916]: 2026-01-23T23:57:43.283849Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 23:57:43.284342 waagent[1916]: 2026-01-23T23:57:43.284303Z INFO ExtHandler Jan 23 23:57:43.284410 waagent[1916]: 2026-01-23T23:57:43.284385Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8750673f-fdbe-4a5b-b8da-35029a9a2974 eTag: 17358760371477842910 source: Fabric] Jan 23 23:57:43.284690 waagent[1916]: 2026-01-23T23:57:43.284654Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 23:57:43.285267 waagent[1916]: 2026-01-23T23:57:43.285227Z INFO ExtHandler Jan 23 23:57:43.285330 waagent[1916]: 2026-01-23T23:57:43.285305Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:57:43.289139 waagent[1916]: 2026-01-23T23:57:43.289111Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:57:43.362427 waagent[1916]: 2026-01-23T23:57:43.362307Z INFO ExtHandler Downloaded certificate {'thumbprint': '666102BC455F6C13462E801798B5420D6114C177', 'hasPrivateKey': True} Jan 23 23:57:43.362863 waagent[1916]: 2026-01-23T23:57:43.362818Z INFO ExtHandler Fetch goal state completed Jan 23 23:57:43.377211 waagent[1916]: 2026-01-23T23:57:43.377162Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1916 Jan 23 23:57:43.377346 waagent[1916]: 2026-01-23T23:57:43.377314Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 23:57:43.378853 waagent[1916]: 2026-01-23T23:57:43.378812Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 23:57:43.379214 waagent[1916]: 2026-01-23T23:57:43.379179Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 23:57:43.412668 waagent[1916]: 2026-01-23T23:57:43.412623Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 23:57:43.412855 waagent[1916]: 2026-01-23T23:57:43.412819Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 23:57:43.418607 waagent[1916]: 2026-01-23T23:57:43.418574Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 23:57:43.424832 systemd[1]: Reloading requested from client PID 1929 ('systemctl') (unit waagent.service)... Jan 23 23:57:43.424845 systemd[1]: Reloading... Jan 23 23:57:43.492015 zram_generator::config[1960]: No configuration found. Jan 23 23:57:43.605766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:57:43.686782 systemd[1]: Reloading finished in 261 ms. Jan 23 23:57:43.719497 waagent[1916]: 2026-01-23T23:57:43.719137Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 23 23:57:43.724385 systemd[1]: Reloading requested from client PID 2017 ('systemctl') (unit waagent.service)... Jan 23 23:57:43.724397 systemd[1]: Reloading... Jan 23 23:57:43.797970 zram_generator::config[2049]: No configuration found. Jan 23 23:57:43.915641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:57:43.992995 systemd[1]: Reloading finished in 268 ms. Jan 23 23:57:44.016975 waagent[1916]: 2026-01-23T23:57:44.016375Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 23:57:44.016975 waagent[1916]: 2026-01-23T23:57:44.016538Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 23:57:45.036967 waagent[1916]: 2026-01-23T23:57:45.036044Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up.
Jan 23 23:57:45.036967 waagent[1916]: 2026-01-23T23:57:45.036631Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 23 23:57:45.037808 waagent[1916]: 2026-01-23T23:57:45.037759Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 23 23:57:45.037939 waagent[1916]: 2026-01-23T23:57:45.037896Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 23 23:57:45.038041 waagent[1916]: 2026-01-23T23:57:45.038007Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 23 23:57:45.038281 waagent[1916]: 2026-01-23T23:57:45.038240Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 23 23:57:45.038675 waagent[1916]: 2026-01-23T23:57:45.038626Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 23 23:57:45.039057 waagent[1916]: 2026-01-23T23:57:45.039000Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 23 23:57:45.039255 waagent[1916]: 2026-01-23T23:57:45.039215Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 23 23:57:45.039255 waagent[1916]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 23 23:57:45.039255 waagent[1916]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 23 23:57:45.039255 waagent[1916]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 23 23:57:45.039255 waagent[1916]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 23 23:57:45.039255 waagent[1916]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 23 23:57:45.039255 waagent[1916]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 23 23:57:45.039804 waagent[1916]: 2026-01-23T23:57:45.039721Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 23 23:57:45.039903 waagent[1916]: 2026-01-23T23:57:45.039860Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 23 23:57:45.039989 waagent[1916]: 2026-01-23T23:57:45.039958Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 23 23:57:45.040146 waagent[1916]: 2026-01-23T23:57:45.040106Z INFO EnvHandler ExtHandler Configure routes
Jan 23 23:57:45.040482 waagent[1916]: 2026-01-23T23:57:45.040238Z INFO EnvHandler ExtHandler Gateway:None
Jan 23 23:57:45.040699 waagent[1916]: 2026-01-23T23:57:45.040636Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 23 23:57:45.040784 waagent[1916]: 2026-01-23T23:57:45.040739Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 23 23:57:45.040895 waagent[1916]: 2026-01-23T23:57:45.040857Z INFO EnvHandler ExtHandler Routes:None
Jan 23 23:57:45.041041 waagent[1916]: 2026-01-23T23:57:45.040961Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 23 23:57:45.047865 waagent[1916]: 2026-01-23T23:57:45.047812Z INFO ExtHandler ExtHandler
Jan 23 23:57:45.048086 waagent[1916]: 2026-01-23T23:57:45.048044Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: cb2e6fb0-cb6e-4507-b3ea-0c26a5b64666 correlation d7740a77-13e7-4b28-acb5-342e59f0a4bb created: 2026-01-23T23:56:41.556236Z]
Jan 23 23:57:45.048499 waagent[1916]: 2026-01-23T23:57:45.048462Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 23 23:57:45.049165 waagent[1916]: 2026-01-23T23:57:45.049095Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jan 23 23:57:45.078971 waagent[1916]: 2026-01-23T23:57:45.078890Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BA54B3C2-FA06-47E4-981F-5C2199DB72F5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 23 23:57:45.081388 waagent[1916]: 2026-01-23T23:57:45.081331Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 23 23:57:45.081388 waagent[1916]: Executing ['ip', '-a', '-o', 'link']:
Jan 23 23:57:45.081388 waagent[1916]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 23 23:57:45.081388 waagent[1916]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:92:e0 brd ff:ff:ff:ff:ff:ff
Jan 23 23:57:45.081388 waagent[1916]: 3: enP28544s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:92:e0 brd ff:ff:ff:ff:ff:ff\ altname enP28544p0s2
Jan 23 23:57:45.081388 waagent[1916]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 23 23:57:45.081388 waagent[1916]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 23 23:57:45.081388 waagent[1916]: 2: eth0 inet 10.200.20.27/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 23 23:57:45.081388 waagent[1916]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 23 23:57:45.081388 waagent[1916]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 23 23:57:45.081388 waagent[1916]: 2: eth0 inet6 fe80::7eed:8dff:fe87:92e0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 23 23:57:45.159194 waagent[1916]: 2026-01-23T23:57:45.159129Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 23 23:57:45.159194 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:57:45.159194 waagent[1916]: pkts bytes target prot opt in out source destination
Jan 23 23:57:45.159194 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:57:45.159194 waagent[1916]: pkts bytes target prot opt in out source destination
Jan 23 23:57:45.159194 waagent[1916]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:57:45.159194 waagent[1916]: pkts bytes target prot opt in out source destination
Jan 23 23:57:45.159194 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 23 23:57:45.159194 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 23 23:57:45.159194 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 23 23:57:45.162137 waagent[1916]: 2026-01-23T23:57:45.162083Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 23 23:57:45.162137 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:57:45.162137 waagent[1916]: pkts bytes target prot opt in out source destination
Jan 23 23:57:45.162137 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:57:45.162137 waagent[1916]: pkts bytes target prot opt in out source destination
Jan 23 23:57:45.162137 waagent[1916]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:57:45.162137 waagent[1916]: pkts bytes target prot opt in out source destination
Jan 23 23:57:45.162137 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 23 23:57:45.162137 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 23 23:57:45.162137 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 23 23:57:45.162363 waagent[1916]: 2026-01-23T23:57:45.162330Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 23 23:57:50.526440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 23:57:50.536105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:57:50.643394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:57:50.647701 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:57:50.732778 kubelet[2144]: E0123 23:57:50.732713 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:57:50.735675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:57:50.735798 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:57:57.846864 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 23:57:57.848026 systemd[1]: Started sshd@0-10.200.20.27:22-10.200.16.10:43528.service - OpenSSH per-connection server daemon (10.200.16.10:43528).
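The three OUTPUT rules dumped twice above are waagent's WireServer protection: allow DNS over TCP 53 to 168.63.129.16, allow traffic owned by root (UID 0), and drop any other new connection to that address. Replaying them with iptables(8) from Python would look roughly like this sketch (waagent manages these rules itself; running this requires root):

import subprocess

WIRESERVER = "168.63.129.16"

rules = [
    # allow DNS to the wireserver
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
     "-j", "ACCEPT"],
    # allow root-owned traffic
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # drop any other new or invalid connection
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w"] + rule, check=True)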
Jan 23 23:57:58.337295 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 43528 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:57:58.338520 sshd[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:58.343110 systemd-logind[1715]: New session 3 of user core. Jan 23 23:57:58.352237 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:57:58.741735 systemd[1]: Started sshd@1-10.200.20.27:22-10.200.16.10:43530.service - OpenSSH per-connection server daemon (10.200.16.10:43530). Jan 23 23:57:59.193088 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 43530 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:57:59.194336 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:59.198891 systemd-logind[1715]: New session 4 of user core. Jan 23 23:57:59.202164 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:57:59.522080 sshd[2157]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:59.525558 systemd[1]: sshd@1-10.200.20.27:22-10.200.16.10:43530.service: Deactivated successfully. Jan 23 23:57:59.527085 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:57:59.527669 systemd-logind[1715]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:57:59.528445 systemd-logind[1715]: Removed session 4. Jan 23 23:57:59.603378 systemd[1]: Started sshd@2-10.200.20.27:22-10.200.16.10:51286.service - OpenSSH per-connection server daemon (10.200.16.10:51286). Jan 23 23:58:00.055122 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 51286 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:00.056402 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:00.060987 systemd-logind[1715]: New session 5 of user core. Jan 23 23:58:00.067099 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:58:00.379920 sshd[2164]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:00.383627 systemd[1]: sshd@2-10.200.20.27:22-10.200.16.10:51286.service: Deactivated successfully. Jan 23 23:58:00.385386 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:58:00.386260 systemd-logind[1715]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:58:00.387038 systemd-logind[1715]: Removed session 5. Jan 23 23:58:00.472691 systemd[1]: Started sshd@3-10.200.20.27:22-10.200.16.10:51298.service - OpenSSH per-connection server daemon (10.200.16.10:51298). Jan 23 23:58:00.873460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:58:00.884196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:00.960916 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 51298 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:00.962661 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:00.973817 systemd-logind[1715]: New session 6 of user core. Jan 23 23:58:00.974734 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:58:00.992360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:58:00.996937 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:58:01.030548 kubelet[2182]: E0123 23:58:01.030505 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:58:01.033262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:58:01.033414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:58:01.314159 sshd[2171]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:01.316767 systemd[1]: sshd@3-10.200.20.27:22-10.200.16.10:51298.service: Deactivated successfully. Jan 23 23:58:01.318646 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:58:01.320316 systemd-logind[1715]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:58:01.321248 systemd-logind[1715]: Removed session 6. Jan 23 23:58:01.400882 systemd[1]: Started sshd@4-10.200.20.27:22-10.200.16.10:51310.service - OpenSSH per-connection server daemon (10.200.16.10:51310). Jan 23 23:58:01.887768 sshd[2193]: Accepted publickey for core from 10.200.16.10 port 51310 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:01.889114 sshd[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:01.892910 systemd-logind[1715]: New session 7 of user core. Jan 23 23:58:01.900102 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:58:02.372625 chronyd[1701]: Selected source PHC0 Jan 23 23:58:02.416737 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:58:02.417027 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:58:02.433077 sudo[2196]: pam_unix(sudo:session): session closed for user root Jan 23 23:58:02.510703 sshd[2193]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:02.514350 systemd-logind[1715]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:58:02.515408 systemd[1]: sshd@4-10.200.20.27:22-10.200.16.10:51310.service: Deactivated successfully. Jan 23 23:58:02.518250 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:58:02.519074 systemd-logind[1715]: Removed session 7. Jan 23 23:58:02.602411 systemd[1]: Started sshd@5-10.200.20.27:22-10.200.16.10:51324.service - OpenSSH per-connection server daemon (10.200.16.10:51324). Jan 23 23:58:03.087983 sshd[2201]: Accepted publickey for core from 10.200.16.10 port 51324 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:03.089334 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:03.092841 systemd-logind[1715]: New session 8 of user core. Jan 23 23:58:03.100079 systemd[1]: Started session-8.scope - Session 8 of User core. 
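This is the kubelet's third identical failure: /var/lib/kubelet/config.yaml does not exist because nothing has bootstrapped the node yet; kubeadm init/join (or the cluster's provisioning tooling) normally writes it. Purely for illustration, a hypothetical minimal KubeletConfiguration that would get past the file-load step; real clusters generate this file rather than hand-writing it:

import pathlib

# Minimal assumed config; cgroupDriver systemd matches the runc
# SystemdCgroup:true setting in the containerd config dumped earlier.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""
path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG)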
Jan 23 23:58:03.362493 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:58:03.362963 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:58:03.365767 sudo[2205]: pam_unix(sudo:session): session closed for user root Jan 23 23:58:03.370196 sudo[2204]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:58:03.370445 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:58:03.381165 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:58:03.383882 auditctl[2208]: No rules Jan 23 23:58:03.384262 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:58:03.384432 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:58:03.386733 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:58:03.408249 augenrules[2226]: No rules Jan 23 23:58:03.409744 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:58:03.410839 sudo[2204]: pam_unix(sudo:session): session closed for user root Jan 23 23:58:03.487973 sshd[2201]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:03.490736 systemd[1]: sshd@5-10.200.20.27:22-10.200.16.10:51324.service: Deactivated successfully. Jan 23 23:58:03.492314 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:58:03.493743 systemd-logind[1715]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:58:03.494555 systemd-logind[1715]: Removed session 8. Jan 23 23:58:03.556672 systemd[1]: Started sshd@6-10.200.20.27:22-10.200.16.10:51334.service - OpenSSH per-connection server daemon (10.200.16.10:51334). Jan 23 23:58:03.965466 sshd[2234]: Accepted publickey for core from 10.200.16.10 port 51334 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:58:03.966721 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:03.971095 systemd-logind[1715]: New session 9 of user core. Jan 23 23:58:03.977102 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:58:04.201887 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:58:04.202173 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:58:05.251358 (dockerd)[2252]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:58:05.251720 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:58:06.014209 dockerd[2252]: time="2026-01-23T23:58:06.014154433Z" level=info msg="Starting up" Jan 23 23:58:06.305280 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1969639655-merged.mount: Deactivated successfully. Jan 23 23:58:06.349796 dockerd[2252]: time="2026-01-23T23:58:06.349754562Z" level=info msg="Loading containers: start." Jan 23 23:58:06.507989 kernel: Initializing XFRM netlink socket Jan 23 23:58:06.670876 systemd-networkd[1364]: docker0: Link UP Jan 23 23:58:06.697217 dockerd[2252]: time="2026-01-23T23:58:06.696902571Z" level=info msg="Loading containers: done." 
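The sudo entries at the start of this block capture a small but deliberate step: the stock audit rule files are deleted and audit-rules is restarted, after which both auditctl and augenrules report "No rules". Replayed as shell, the logged commands are simply (taken verbatim from the COMMAND= fields above; they require root):

    rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules   # reloads via augenrules; with no rule files left it logs "No rules"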
Jan 23 23:58:06.718856 dockerd[2252]: time="2026-01-23T23:58:06.718815892Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 23:58:06.719469 dockerd[2252]: time="2026-01-23T23:58:06.719168532Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 23 23:58:06.719469 dockerd[2252]: time="2026-01-23T23:58:06.719283492Z" level=info msg="Daemon has completed initialization"
Jan 23 23:58:06.772006 dockerd[2252]: time="2026-01-23T23:58:06.771937613Z" level=info msg="API listen on /run/docker.sock"
Jan 23 23:58:06.772538 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 23:58:07.300961 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3503314400-merged.mount: Deactivated successfully.
Jan 23 23:58:07.538828 containerd[1736]: time="2026-01-23T23:58:07.538534154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 23 23:58:08.370021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188109997.mount: Deactivated successfully.
Jan 23 23:58:09.576984 containerd[1736]: time="2026-01-23T23:58:09.575915088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:58:09.580998 containerd[1736]: time="2026-01-23T23:58:09.580966088Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982"
Jan 23 23:58:09.586072 containerd[1736]: time="2026-01-23T23:58:09.586028329Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:58:09.591355 containerd[1736]: time="2026-01-23T23:58:09.591304649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:58:09.592847 containerd[1736]: time="2026-01-23T23:58:09.592346129Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.053771775s"
Jan 23 23:58:09.592847 containerd[1736]: time="2026-01-23T23:58:09.592382889Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 23 23:58:09.592957 containerd[1736]: time="2026-01-23T23:58:09.592908769Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 23 23:58:10.800028 containerd[1736]: time="2026-01-23T23:58:10.799087678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:58:10.802896 containerd[1736]: time="2026-01-23T23:58:10.802688639Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086"
Jan 23 23:58:10.805701 containerd[1736]: time="2026-01-23T23:58:10.805181639Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:58:10.809694 containerd[1736]: time="2026-01-23T23:58:10.809650760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:58:10.810774 containerd[1736]: time="2026-01-23T23:58:10.810746000Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.217810231s"
Jan 23 23:58:10.810866 containerd[1736]: time="2026-01-23T23:58:10.810851840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 23 23:58:10.811351 containerd[1736]: time="2026-01-23T23:58:10.811321600Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 23 23:58:11.283768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 23:58:11.292115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:58:11.399410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:58:11.406504 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:58:11.474967 kubelet[2457]: E0123 23:58:11.473821 2457 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:58:11.477261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:58:11.477402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:58:12.123356 containerd[1736]: time="2026-01-23T23:58:12.123305959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:12.126468 containerd[1736]: time="2026-01-23T23:58:12.126439960Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 23:58:12.129110 containerd[1736]: time="2026-01-23T23:58:12.129063440Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:12.134054 containerd[1736]: time="2026-01-23T23:58:12.133680001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:12.134757 containerd[1736]: time="2026-01-23T23:58:12.134728961Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.323370001s" Jan 23 23:58:12.134811 containerd[1736]: time="2026-01-23T23:58:12.134758001Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 23:58:12.135477 containerd[1736]: time="2026-01-23T23:58:12.135456841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 23:58:13.739934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424718876.mount: Deactivated successfully. 
Jan 23 23:58:14.071310 containerd[1736]: time="2026-01-23T23:58:14.071259212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:14.078973 containerd[1736]: time="2026-01-23T23:58:14.078760054Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 23:58:14.081060 containerd[1736]: time="2026-01-23T23:58:14.081036054Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:14.085133 containerd[1736]: time="2026-01-23T23:58:14.085039735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:14.086072 containerd[1736]: time="2026-01-23T23:58:14.085551055Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.949991254s" Jan 23 23:58:14.086072 containerd[1736]: time="2026-01-23T23:58:14.085581255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:58:14.086072 containerd[1736]: time="2026-01-23T23:58:14.085981935Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 23:58:14.759212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990507173.mount: Deactivated successfully. 
Jan 23 23:58:15.712969 containerd[1736]: time="2026-01-23T23:58:15.712020966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:15.714253 containerd[1736]: time="2026-01-23T23:58:15.714226287Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 23:58:15.717400 containerd[1736]: time="2026-01-23T23:58:15.717355487Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:15.723041 containerd[1736]: time="2026-01-23T23:58:15.722993687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:15.724080 containerd[1736]: time="2026-01-23T23:58:15.724054807Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.638048392s" Jan 23 23:58:15.724230 containerd[1736]: time="2026-01-23T23:58:15.724143647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 23:58:15.724678 containerd[1736]: time="2026-01-23T23:58:15.724654087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:58:16.276395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642186647.mount: Deactivated successfully. 
Jan 23 23:58:16.296324 containerd[1736]: time="2026-01-23T23:58:16.296276080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:16.298255 containerd[1736]: time="2026-01-23T23:58:16.298096120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:58:16.300746 containerd[1736]: time="2026-01-23T23:58:16.300701680Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:16.304716 containerd[1736]: time="2026-01-23T23:58:16.304668680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:16.306284 containerd[1736]: time="2026-01-23T23:58:16.305339640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 580.655713ms" Jan 23 23:58:16.306284 containerd[1736]: time="2026-01-23T23:58:16.305369600Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:58:16.306417 containerd[1736]: time="2026-01-23T23:58:16.306315840Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 23:58:16.935875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227564884.mount: Deactivated successfully. Jan 23 23:58:18.796981 containerd[1736]: time="2026-01-23T23:58:18.796341376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:18.798622 containerd[1736]: time="2026-01-23T23:58:18.798349255Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 23:58:18.800960 containerd[1736]: time="2026-01-23T23:58:18.800921735Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:18.806164 containerd[1736]: time="2026-01-23T23:58:18.806102654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:18.807710 containerd[1736]: time="2026-01-23T23:58:18.807425094Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.501085334s" Jan 23 23:58:18.807710 containerd[1736]: time="2026-01-23T23:58:18.807463534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 23:58:21.491493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
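With the etcd pull above, the node now holds the full control-plane image set, fetched in sequence: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.32.11, coredns v1.11.3, pause 3.10 and etcd 3.5.16-0. The pulls here are driven by the installer, but a hypothetical one-shot equivalent for pre-pulling a matching set would be:

    # Hypothetical: pre-pull the control-plane images for a given version
    kubeadm config images pull --kubernetes-version v1.32.11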
Jan 23 23:58:21.501304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:21.711085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:58:21.715903 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:58:21.758214 kubelet[2614]: E0123 23:58:21.757124 2614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:58:21.761084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:58:21.761254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:58:22.541477 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 23 23:58:23.266064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:58:23.274161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:23.301664 systemd[1]: Reloading requested from client PID 2628 ('systemctl') (unit session-9.scope)... Jan 23 23:58:23.301682 systemd[1]: Reloading... Jan 23 23:58:23.405161 zram_generator::config[2677]: No configuration found. Jan 23 23:58:23.487414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:58:23.570039 systemd[1]: Reloading finished in 267 ms. Jan 23 23:58:23.603590 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:58:23.603819 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:58:23.604212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:58:23.610225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:24.257309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:58:24.267186 (kubelet)[2731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:58:24.302897 kubelet[2731]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:58:24.302897 kubelet[2731]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:58:24.302897 kubelet[2731]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:58:24.303274 kubelet[2731]: I0123 23:58:24.302978 2731 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:58:24.321494 update_engine[1719]: I20260123 23:58:24.320920 1719 update_attempter.cc:509] Updating boot flags... 
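This restart is the first one that gets past config loading: a config file now exists, and the kubelet merely warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flag spellings of settings that belong in the file passed via --config. A hedged sketch for checking what the unit actually runs with (hypothetical commands; the field name assumes the v1beta1 KubeletConfiguration schema):

    # Show the unit file plus drop-ins that assemble the kubelet command line
    systemctl cat kubelet.service
    # The deprecated flags map to config fields such as containerRuntimeEndpoint
    grep -E 'containerRuntimeEndpoint|staticPodPath' /var/lib/kubelet/config.yaml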
Jan 23 23:58:24.453296 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2750)
Jan 23 23:58:24.537041 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2752)
Jan 23 23:58:24.652112 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2752)
Jan 23 23:58:24.947120 kubelet[2731]: I0123 23:58:24.947005 2731 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 23:58:24.947249 kubelet[2731]: I0123 23:58:24.947238 2731 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 23:58:24.947583 kubelet[2731]: I0123 23:58:24.947565 2731 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 23:58:24.975104 kubelet[2731]: E0123 23:58:24.975066 2731 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:58:24.976044 kubelet[2731]: I0123 23:58:24.976007 2731 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 23:58:24.981797 kubelet[2731]: E0123 23:58:24.981764 2731 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 23 23:58:24.981797 kubelet[2731]: I0123 23:58:24.981793 2731 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 23 23:58:24.984491 kubelet[2731]: I0123 23:58:24.984469 2731 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 23:58:24.986092 kubelet[2731]: I0123 23:58:24.986055 2731 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 23:58:24.986258 kubelet[2731]: I0123 23:58:24.986094 2731 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-95a9bf6543","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 23:58:24.986345 kubelet[2731]: I0123 23:58:24.986268 2731 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 23:58:24.986345 kubelet[2731]: I0123 23:58:24.986277 2731 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 23:58:24.986420 kubelet[2731]: I0123 23:58:24.986404 2731 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:58:24.989194 kubelet[2731]: I0123 23:58:24.989175 2731 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 23:58:24.989235 kubelet[2731]: I0123 23:58:24.989204 2731 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 23:58:24.989235 kubelet[2731]: I0123 23:58:24.989222 2731 kubelet.go:352] "Adding apiserver pod source"
Jan 23 23:58:24.989235 kubelet[2731]: I0123 23:58:24.989235 2731 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 23:58:24.993648 kubelet[2731]: W0123 23:58:24.993601 2731 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused
Jan 23 23:58:24.993726 kubelet[2731]: E0123 23:58:24.993661 2731 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:58:24.993936 kubelet[2731]: W0123 23:58:24.993901 2731 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-95a9bf6543&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused
Jan 23 23:58:24.993990 kubelet[2731]: E0123 23:58:24.993939 2731 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-95a9bf6543&limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:58:24.994288 kubelet[2731]: I0123 23:58:24.994265 2731 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 23 23:58:24.994724 kubelet[2731]: I0123 23:58:24.994704 2731 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 23:58:24.994769 kubelet[2731]: W0123 23:58:24.994757 2731 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 23:58:24.996203 kubelet[2731]: I0123 23:58:24.996183 2731 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 23:58:24.996280 kubelet[2731]: I0123 23:58:24.996216 2731 server.go:1287] "Started kubelet"
Jan 23 23:58:24.998224 kubelet[2731]: I0123 23:58:24.998188 2731 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 23:58:24.998532 kubelet[2731]: I0123 23:58:24.998506 2731 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 23:58:25.000157 kubelet[2731]: I0123 23:58:24.999558 2731 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 23:58:25.000381 kubelet[2731]: I0123 23:58:25.000362 2731 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 23:58:25.000459 kubelet[2731]: I0123 23:58:25.000447 2731 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 23:58:25.000674 kubelet[2731]: E0123 23:58:25.000654 2731 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-95a9bf6543\" not found"
Jan 23 23:58:25.007084 kubelet[2731]: I0123 23:58:25.007060 2731 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 23:58:25.009314 kubelet[2731]: I0123 23:58:25.008171 2731 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 23:58:25.009314 kubelet[2731]: I0123 23:58:25.008237 2731 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 23:58:25.009528 kubelet[2731]: I0123 23:58:25.009505 2731 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 23:58:25.011338 kubelet[2731]: W0123 23:58:25.011293 2731 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused
Jan 23 23:58:25.011464 kubelet[2731]: E0123 23:58:25.011445 2731 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:58:25.011800 kubelet[2731]: I0123 23:58:25.011779 2731 factory.go:221] Registration of the systemd container factory successfully
Jan 23 23:58:25.011967 kubelet[2731]: I0123 23:58:25.011936 2731 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 23:58:25.012307 kubelet[2731]: E0123 23:58:25.012021 2731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.27:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.27:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-95a9bf6543.188d8194dd680df1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-95a9bf6543,UID:ci-4081.3.6-n-95a9bf6543,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-95a9bf6543,},FirstTimestamp:2026-01-23 23:58:24.996199921 +0000 UTC m=+0.726164539,LastTimestamp:2026-01-23 23:58:24.996199921 +0000 UTC m=+0.726164539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-95a9bf6543,}"
Jan 23 23:58:25.012407 kubelet[2731]: E0123 23:58:25.012367 2731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-95a9bf6543?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="200ms"
Jan 23 23:58:25.013722 kubelet[2731]: I0123 23:58:25.013701 2731 factory.go:221] Registration of the containerd container factory successfully
Jan 23 23:58:25.032058 kubelet[2731]: E0123 23:58:25.032021 2731 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 23:58:25.100818 kubelet[2731]: E0123 23:58:25.100762 2731 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-95a9bf6543\" not found"
Jan 23 23:58:25.104185 kubelet[2731]: I0123 23:58:25.104156 2731 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 23:58:25.104185 kubelet[2731]: I0123 23:58:25.104173 2731 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 23:58:25.104185 kubelet[2731]: I0123 23:58:25.104190 2731 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:58:25.201898 kubelet[2731]: E0123 23:58:25.201810 2731 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-95a9bf6543\" not found"
Jan 23 23:58:25.213464 kubelet[2731]: E0123 23:58:25.213420 2731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-95a9bf6543?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="400ms"
Jan 23 23:58:25.302275 kubelet[2731]: E0123 23:58:25.302244 2731 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-95a9bf6543\" not found"
Jan 23 23:58:25.319317 kubelet[2731]: I0123 23:58:25.319284 2731 policy_none.go:49] "None policy: Start"
Jan 23 23:58:25.319317 kubelet[2731]: I0123 23:58:25.319317 2731 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 23:58:25.319650 kubelet[2731]: I0123 23:58:25.319335 2731 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 23:58:25.327343 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 23:58:25.343919 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 23:58:25.356677 kubelet[2731]: I0123 23:58:25.356500 2731 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 23:58:25.357107 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 23:58:25.358528 kubelet[2731]: I0123 23:58:25.358495 2731 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 23:58:25.358528 kubelet[2731]: I0123 23:58:25.358520 2731 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 23:58:25.358632 kubelet[2731]: I0123 23:58:25.358539 2731 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 23:58:25.358632 kubelet[2731]: I0123 23:58:25.358545 2731 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:58:25.358632 kubelet[2731]: E0123 23:58:25.358587 2731 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:58:25.361526 kubelet[2731]: W0123 23:58:25.360818 2731 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jan 23 23:58:25.361526 kubelet[2731]: E0123 23:58:25.360929 2731 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:58:25.361526 kubelet[2731]: I0123 23:58:25.361397 2731 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:58:25.362353 kubelet[2731]: I0123 23:58:25.362329 2731 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:58:25.362423 kubelet[2731]: I0123 23:58:25.362348 2731 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:58:25.362775 kubelet[2731]: I0123 23:58:25.362760 2731 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:58:25.364463 kubelet[2731]: E0123 23:58:25.364442 2731 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:58:25.364520 kubelet[2731]: E0123 23:58:25.364481 2731 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-95a9bf6543\" not found" Jan 23 23:58:25.464696 kubelet[2731]: I0123 23:58:25.463868 2731 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:25.464696 kubelet[2731]: E0123 23:58:25.464351 2731 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:25.468984 systemd[1]: Created slice kubepods-burstable-podc30a4a50a934bba493952f11af0395c1.slice - libcontainer container kubepods-burstable-podc30a4a50a934bba493952f11af0395c1.slice. Jan 23 23:58:25.479785 kubelet[2731]: E0123 23:58:25.479762 2731 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:25.483058 systemd[1]: Created slice kubepods-burstable-pod0b65863bac2e4604abf90512d2ef76f1.slice - libcontainer container kubepods-burstable-pod0b65863bac2e4604abf90512d2ef76f1.slice. 
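Every "dial tcp 10.200.20.27:6443: connect: connection refused" in this stretch is the normal bootstrap ordering rather than a fault: the kubelet comes up first, so its informers, lease controller, event writer and CSR bootstrap all retry against an API server that does not exist yet, while the manifests under /etc/kubernetes/manifests (registered as the static pod source earlier in the log) are used to start the apiserver, controller-manager and scheduler locally. A hypothetical probe during this window would fail the same way:

    # Hypothetical: the endpoint every retry above is dialing
    curl -ks --max-time 2 https://10.200.20.27:6443/healthz || echo "apiserver not serving yet"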
Jan 23 23:58:25.484783 kubelet[2731]: E0123 23:58:25.484763 2731 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.487695 systemd[1]: Created slice kubepods-burstable-poded21faeef6d83c63012b4ed351a6d55d.slice - libcontainer container kubepods-burstable-poded21faeef6d83c63012b4ed351a6d55d.slice.
Jan 23 23:58:25.489236 kubelet[2731]: E0123 23:58:25.489089 2731 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511036 kubelet[2731]: I0123 23:58:25.511013 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511238 kubelet[2731]: I0123 23:58:25.511175 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511238 kubelet[2731]: I0123 23:58:25.511199 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed21faeef6d83c63012b4ed351a6d55d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-95a9bf6543\" (UID: \"ed21faeef6d83c63012b4ed351a6d55d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511405 kubelet[2731]: I0123 23:58:25.511215 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c30a4a50a934bba493952f11af0395c1-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-95a9bf6543\" (UID: \"c30a4a50a934bba493952f11af0395c1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511405 kubelet[2731]: I0123 23:58:25.511335 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c30a4a50a934bba493952f11af0395c1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-95a9bf6543\" (UID: \"c30a4a50a934bba493952f11af0395c1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511405 kubelet[2731]: I0123 23:58:25.511354 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511405 kubelet[2731]: I0123 23:58:25.511368 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c30a4a50a934bba493952f11af0395c1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-95a9bf6543\" (UID: \"c30a4a50a934bba493952f11af0395c1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511611 kubelet[2731]: I0123 23:58:25.511537 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.511611 kubelet[2731]: I0123 23:58:25.511562 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.614416 kubelet[2731]: E0123 23:58:25.614380 2731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-95a9bf6543?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="800ms"
Jan 23 23:58:25.666803 kubelet[2731]: I0123 23:58:25.666764 2731 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.667165 kubelet[2731]: E0123 23:58:25.667133 2731 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:25.781461 containerd[1736]: time="2026-01-23T23:58:25.781424500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-95a9bf6543,Uid:c30a4a50a934bba493952f11af0395c1,Namespace:kube-system,Attempt:0,}"
Jan 23 23:58:25.786595 containerd[1736]: time="2026-01-23T23:58:25.786365820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-95a9bf6543,Uid:0b65863bac2e4604abf90512d2ef76f1,Namespace:kube-system,Attempt:0,}"
Jan 23 23:58:25.790378 containerd[1736]: time="2026-01-23T23:58:25.790348141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-95a9bf6543,Uid:ed21faeef6d83c63012b4ed351a6d55d,Namespace:kube-system,Attempt:0,}"
Jan 23 23:58:26.069596 kubelet[2731]: I0123 23:58:26.069497 2731 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:26.070037 kubelet[2731]: E0123 23:58:26.070004 2731 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:26.182919 kubelet[2731]: W0123 23:58:26.182841 2731 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused
Jan 23 23:58:26.182919 kubelet[2731]: E0123 23:58:26.182884 2731 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:58:26.193497 kubelet[2731]: W0123 23:58:26.193411 2731 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused
Jan 23 23:58:26.193497 kubelet[2731]: E0123 23:58:26.193472 2731 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:58:26.382113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279134673.mount: Deactivated successfully.
Jan 23 23:58:26.408984 containerd[1736]: time="2026-01-23T23:58:26.408368037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:58:26.414337 containerd[1736]: time="2026-01-23T23:58:26.414286161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 23 23:58:26.415775 kubelet[2731]: E0123 23:58:26.415737 2731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-95a9bf6543?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="1.6s"
Jan 23 23:58:26.417094 containerd[1736]: time="2026-01-23T23:58:26.417060242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:58:26.421254 containerd[1736]: time="2026-01-23T23:58:26.420535285Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:58:26.423377 containerd[1736]: time="2026-01-23T23:58:26.423347206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 23 23:58:26.427961 containerd[1736]: time="2026-01-23T23:58:26.427037449Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:58:26.429962 containerd[1736]: time="2026-01-23T23:58:26.429918930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 23 23:58:26.434040 containerd[1736]: time="2026-01-23T23:58:26.434003533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:58:26.434832 containerd[1736]: time="2026-01-23T23:58:26.434807253Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 648.379633ms"
Jan 23 23:58:26.437173 containerd[1736]: time="2026-01-23T23:58:26.437134775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 646.729514ms"
Jan 23 23:58:26.437841 containerd[1736]: time="2026-01-23T23:58:26.437814095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 656.315675ms"
Jan 23 23:58:26.571338 kubelet[2731]: W0123 23:58:26.571246 2731 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-95a9bf6543&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused
Jan 23 23:58:26.571338 kubelet[2731]: E0123 23:58:26.571308 2731 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-95a9bf6543&limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:58:26.740332 kubelet[2731]: W0123 23:58:26.740248 2731 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused
Jan 23 23:58:26.740332 kubelet[2731]: E0123 23:58:26.740286 2731 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:58:26.871866 kubelet[2731]: I0123 23:58:26.871834 2731 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:26.872174 kubelet[2731]: E0123 23:58:26.872150 2731 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:58:27.036778 containerd[1736]: time="2026-01-23T23:58:27.035328748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:58:27.036778 containerd[1736]: time="2026-01-23T23:58:27.035375108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:58:27.036778 containerd[1736]: time="2026-01-23T23:58:27.035384868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:58:27.036778 containerd[1736]: time="2026-01-23T23:58:27.035455948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:58:27.037677 containerd[1736]: time="2026-01-23T23:58:27.037484789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:58:27.037677 containerd[1736]: time="2026-01-23T23:58:27.037530110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:58:27.037677 containerd[1736]: time="2026-01-23T23:58:27.037546030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:58:27.037677 containerd[1736]: time="2026-01-23T23:58:27.037621150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:58:27.039950 containerd[1736]: time="2026-01-23T23:58:27.039673711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:58:27.039950 containerd[1736]: time="2026-01-23T23:58:27.039709431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:58:27.039950 containerd[1736]: time="2026-01-23T23:58:27.039719431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:58:27.039950 containerd[1736]: time="2026-01-23T23:58:27.039773951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:58:27.061107 systemd[1]: Started cri-containerd-6390147bc95994f6fa6e1626ddb46959005ff0a008c9f7aa2b5995baed09ef2f.scope - libcontainer container 6390147bc95994f6fa6e1626ddb46959005ff0a008c9f7aa2b5995baed09ef2f.
Jan 23 23:58:27.062714 systemd[1]: Started cri-containerd-fbbcb26ac831035fb2d2b9a8468cf35882f5451d803af0a885f29f92cf3c12ef.scope - libcontainer container fbbcb26ac831035fb2d2b9a8468cf35882f5451d803af0a885f29f92cf3c12ef.
Jan 23 23:58:27.066883 systemd[1]: Started cri-containerd-902621568d947a4b07553d295665d5e129834892fe0138b1a49d2cc38c3e5714.scope - libcontainer container 902621568d947a4b07553d295665d5e129834892fe0138b1a49d2cc38c3e5714.
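The plugin-loading lines and the cri-containerd-<id>.scope units above are containerd's runc shim v2 coming up once per pod sandbox; the same sandbox and container ids reappear below as RunPodSandbox and StartContainer return. A hedged sketch for inspecting those objects over containerd's CRI socket (hypothetical commands, assuming crictl is installed; the socket path is the containerd default):

    # Hypothetical: list the sandboxes and containers whose ids appear in the log
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a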
Jan 23 23:58:27.115695 containerd[1736]: time="2026-01-23T23:58:27.115182518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-95a9bf6543,Uid:0b65863bac2e4604abf90512d2ef76f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6390147bc95994f6fa6e1626ddb46959005ff0a008c9f7aa2b5995baed09ef2f\"" Jan 23 23:58:27.116367 containerd[1736]: time="2026-01-23T23:58:27.116297599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-95a9bf6543,Uid:c30a4a50a934bba493952f11af0395c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbbcb26ac831035fb2d2b9a8468cf35882f5451d803af0a885f29f92cf3c12ef\"" Jan 23 23:58:27.123503 containerd[1736]: time="2026-01-23T23:58:27.122090602Z" level=info msg="CreateContainer within sandbox \"fbbcb26ac831035fb2d2b9a8468cf35882f5451d803af0a885f29f92cf3c12ef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:58:27.123503 containerd[1736]: time="2026-01-23T23:58:27.123424123Z" level=info msg="CreateContainer within sandbox \"6390147bc95994f6fa6e1626ddb46959005ff0a008c9f7aa2b5995baed09ef2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:58:27.123604 kubelet[2731]: E0123 23:58:27.123286 2731 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.27:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:58:27.133333 containerd[1736]: time="2026-01-23T23:58:27.133303769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-95a9bf6543,Uid:ed21faeef6d83c63012b4ed351a6d55d,Namespace:kube-system,Attempt:0,} returns sandbox id \"902621568d947a4b07553d295665d5e129834892fe0138b1a49d2cc38c3e5714\"" Jan 23 23:58:27.135733 containerd[1736]: time="2026-01-23T23:58:27.135707651Z" level=info msg="CreateContainer within sandbox \"902621568d947a4b07553d295665d5e129834892fe0138b1a49d2cc38c3e5714\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:58:27.208821 containerd[1736]: time="2026-01-23T23:58:27.208775776Z" level=info msg="CreateContainer within sandbox \"6390147bc95994f6fa6e1626ddb46959005ff0a008c9f7aa2b5995baed09ef2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eb6c274e5d173aedf0161d835f6bbfe30f5c503db66a7981f59bfe4ac6316b92\"" Jan 23 23:58:27.209462 containerd[1736]: time="2026-01-23T23:58:27.209437057Z" level=info msg="StartContainer for \"eb6c274e5d173aedf0161d835f6bbfe30f5c503db66a7981f59bfe4ac6316b92\"" Jan 23 23:58:27.217622 containerd[1736]: time="2026-01-23T23:58:27.217520262Z" level=info msg="CreateContainer within sandbox \"902621568d947a4b07553d295665d5e129834892fe0138b1a49d2cc38c3e5714\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"018e33020e17c92f3fd42e79fe08f98f391f449dd02416b2d51bbe58925c56a2\"" Jan 23 23:58:27.218075 containerd[1736]: time="2026-01-23T23:58:27.217933822Z" level=info msg="StartContainer for \"018e33020e17c92f3fd42e79fe08f98f391f449dd02416b2d51bbe58925c56a2\"" Jan 23 23:58:27.222490 containerd[1736]: time="2026-01-23T23:58:27.222454025Z" level=info msg="CreateContainer within sandbox \"fbbcb26ac831035fb2d2b9a8468cf35882f5451d803af0a885f29f92cf3c12ef\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bd50f95477f7ae92a736a64103625e3c3b6b8eff1dfb6871204fee2b3ab959c9\"" Jan 23 23:58:27.224971 containerd[1736]: time="2026-01-23T23:58:27.222960425Z" level=info msg="StartContainer for \"bd50f95477f7ae92a736a64103625e3c3b6b8eff1dfb6871204fee2b3ab959c9\"" Jan 23 23:58:27.237612 systemd[1]: Started cri-containerd-eb6c274e5d173aedf0161d835f6bbfe30f5c503db66a7981f59bfe4ac6316b92.scope - libcontainer container eb6c274e5d173aedf0161d835f6bbfe30f5c503db66a7981f59bfe4ac6316b92. Jan 23 23:58:27.259100 systemd[1]: Started cri-containerd-018e33020e17c92f3fd42e79fe08f98f391f449dd02416b2d51bbe58925c56a2.scope - libcontainer container 018e33020e17c92f3fd42e79fe08f98f391f449dd02416b2d51bbe58925c56a2. Jan 23 23:58:27.263453 systemd[1]: Started cri-containerd-bd50f95477f7ae92a736a64103625e3c3b6b8eff1dfb6871204fee2b3ab959c9.scope - libcontainer container bd50f95477f7ae92a736a64103625e3c3b6b8eff1dfb6871204fee2b3ab959c9. Jan 23 23:58:27.305007 containerd[1736]: time="2026-01-23T23:58:27.304625236Z" level=info msg="StartContainer for \"eb6c274e5d173aedf0161d835f6bbfe30f5c503db66a7981f59bfe4ac6316b92\" returns successfully" Jan 23 23:58:27.325050 containerd[1736]: time="2026-01-23T23:58:27.324141608Z" level=info msg="StartContainer for \"bd50f95477f7ae92a736a64103625e3c3b6b8eff1dfb6871204fee2b3ab959c9\" returns successfully" Jan 23 23:58:27.332260 containerd[1736]: time="2026-01-23T23:58:27.332218733Z" level=info msg="StartContainer for \"018e33020e17c92f3fd42e79fe08f98f391f449dd02416b2d51bbe58925c56a2\" returns successfully" Jan 23 23:58:27.373414 kubelet[2731]: E0123 23:58:27.373379 2731 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:27.379409 kubelet[2731]: E0123 23:58:27.379046 2731 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:27.380049 kubelet[2731]: E0123 23:58:27.379870 2731 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:28.380718 kubelet[2731]: E0123 23:58:28.380272 2731 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:28.382012 kubelet[2731]: E0123 23:58:28.381990 2731 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:28.474402 kubelet[2731]: I0123 23:58:28.474374 2731 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.325778 kubelet[2731]: E0123 23:58:29.325730 2731 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-95a9bf6543\" not found" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.426052 kubelet[2731]: I0123 23:58:29.425851 2731 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.426052 kubelet[2731]: E0123 23:58:29.425889 2731 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"ci-4081.3.6-n-95a9bf6543\": node \"ci-4081.3.6-n-95a9bf6543\" not found" Jan 23 23:58:29.564678 kubelet[2731]: E0123 23:58:29.564628 2731 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" Jan 23 23:58:29.702315 kubelet[2731]: I0123 23:58:29.701989 2731 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.726802 kubelet[2731]: E0123 23:58:29.726625 2731 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-95a9bf6543\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.726802 kubelet[2731]: I0123 23:58:29.726682 2731 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.730364 kubelet[2731]: E0123 23:58:29.730164 2731 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.730364 kubelet[2731]: I0123 23:58:29.730188 2731 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.731911 kubelet[2731]: E0123 23:58:29.731865 2731 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-95a9bf6543\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:29.994821 kubelet[2731]: I0123 23:58:29.994595 2731 apiserver.go:52] "Watching apiserver" Jan 23 23:58:30.008696 kubelet[2731]: I0123 23:58:30.008665 2731 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:58:30.466094 kubelet[2731]: I0123 23:58:30.466004 2731 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:30.515873 kubelet[2731]: W0123 23:58:30.515541 2731 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:58:31.700154 systemd[1]: Reloading requested from client PID 3099 ('systemctl') (unit session-9.scope)... Jan 23 23:58:31.700167 systemd[1]: Reloading... Jan 23 23:58:31.781990 zram_generator::config[3139]: No configuration found. Jan 23 23:58:31.895050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:58:31.989801 systemd[1]: Reloading finished in 289 ms. Jan 23 23:58:32.022827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:32.035532 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:58:32.035749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:58:32.040450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:58:32.138870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:58:32.147634 (kubelet)[3203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:58:32.188968 kubelet[3203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:58:32.188968 kubelet[3203]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:58:32.188968 kubelet[3203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:58:32.188968 kubelet[3203]: I0123 23:58:32.187539 3203 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:58:32.194091 kubelet[3203]: I0123 23:58:32.194060 3203 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:58:32.194091 kubelet[3203]: I0123 23:58:32.194085 3203 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:58:32.194333 kubelet[3203]: I0123 23:58:32.194315 3203 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:58:32.195590 kubelet[3203]: I0123 23:58:32.195570 3203 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 23:58:32.198008 kubelet[3203]: I0123 23:58:32.197986 3203 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:58:32.201089 kubelet[3203]: E0123 23:58:32.201062 3203 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:58:32.201162 kubelet[3203]: I0123 23:58:32.201105 3203 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:58:32.204060 kubelet[3203]: I0123 23:58:32.204037 3203 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:58:32.206000 kubelet[3203]: I0123 23:58:32.204387 3203 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:58:32.206000 kubelet[3203]: I0123 23:58:32.204419 3203 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-95a9bf6543","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:58:32.206000 kubelet[3203]: I0123 23:58:32.204766 3203 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:58:32.206000 kubelet[3203]: I0123 23:58:32.204780 3203 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:58:32.206220 kubelet[3203]: I0123 23:58:32.204826 3203 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:58:32.206220 kubelet[3203]: I0123 23:58:32.204957 3203 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:58:32.206220 kubelet[3203]: I0123 23:58:32.204970 3203 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:58:32.206220 kubelet[3203]: I0123 23:58:32.204989 3203 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:58:32.206220 kubelet[3203]: I0123 23:58:32.205003 3203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:58:32.207578 kubelet[3203]: I0123 23:58:32.207554 3203 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:58:32.208268 kubelet[3203]: I0123 23:58:32.208246 3203 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:58:32.209568 kubelet[3203]: I0123 23:58:32.209547 3203 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:58:32.209682 kubelet[3203]: I0123 23:58:32.209672 3203 server.go:1287] "Started kubelet" Jan 23 23:58:32.212445 kubelet[3203]: I0123 23:58:32.212424 3203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:58:32.215118 kubelet[3203]: I0123 23:58:32.215096 3203 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:58:32.216259 kubelet[3203]: I0123 23:58:32.216246 3203 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:58:32.216515 kubelet[3203]: E0123 23:58:32.216497 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-95a9bf6543\" not found" Jan 23 23:58:32.217388 kubelet[3203]: I0123 23:58:32.217370 3203 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:58:32.217581 kubelet[3203]: I0123 23:58:32.217569 3203 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:58:32.228292 kubelet[3203]: I0123 23:58:32.227716 3203 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:58:32.228292 kubelet[3203]: I0123 23:58:32.227822 3203 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:58:32.239033 kubelet[3203]: I0123 23:58:32.238988 3203 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:58:32.239490 kubelet[3203]: I0123 23:58:32.239448 3203 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:58:32.239748 kubelet[3203]: I0123 23:58:32.239735 3203 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:58:32.243313 kubelet[3203]: I0123 23:58:32.243291 3203 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:58:32.252833 kubelet[3203]: I0123 23:58:32.252809 3203 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:58:32.263643 kubelet[3203]: I0123 23:58:32.263593 3203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:58:32.266590 kubelet[3203]: I0123 23:58:32.266557 3203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:58:32.266590 kubelet[3203]: I0123 23:58:32.266588 3203 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:58:32.266732 kubelet[3203]: I0123 23:58:32.266606 3203 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
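The NodeConfig dump a few entries back embeds the kubelet's hard eviction thresholds as inline JSON. A small sketch that decodes that fragment makes the structure easier to read; the struct below is a simplified stand-in for the kubelet's internal eviction types, and the two thresholds are trimmed verbatim from the log line:

```go
// Sketch: decoding the HardEvictionThresholds fragment from the kubelet's
// NodeConfig log entry. The Threshold type is a simplified stand-in, not
// the kubelet's own type.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type Threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
	GracePeriod int64 `json:"GracePeriod"`
}

func main() {
	// Trimmed verbatim from the log line above.
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0}]`

	var ts []Threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		log.Fatal(err)
	}
	for _, t := range ts {
		q := "<nil>"
		if t.Value.Quantity != nil {
			q = *t.Value.Quantity
		}
		fmt.Printf("%s %s quantity=%s pct=%.2f\n", t.Signal, t.Operator, q, t.Value.Percentage)
	}
}
```

So eviction triggers on either an absolute quantity (memory.available < 100Mi) or a percentage (nodefs.available < 10%), whichever form a threshold carries.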
Jan 23 23:58:32.266732 kubelet[3203]: I0123 23:58:32.266612 3203 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:58:32.266732 kubelet[3203]: E0123 23:58:32.266653 3203 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:58:32.321452 kubelet[3203]: I0123 23:58:32.321411 3203 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:58:32.321452 kubelet[3203]: I0123 23:58:32.321427 3203 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:58:32.321452 kubelet[3203]: I0123 23:58:32.321447 3203 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:58:32.321642 kubelet[3203]: I0123 23:58:32.321615 3203 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:58:32.321642 kubelet[3203]: I0123 23:58:32.321626 3203 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:58:32.321642 kubelet[3203]: I0123 23:58:32.321642 3203 policy_none.go:49] "None policy: Start" Jan 23 23:58:32.321719 kubelet[3203]: I0123 23:58:32.321651 3203 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:58:32.321719 kubelet[3203]: I0123 23:58:32.321660 3203 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:58:32.321781 kubelet[3203]: I0123 23:58:32.321750 3203 state_mem.go:75] "Updated machine memory state" Jan 23 23:58:32.325331 kubelet[3203]: I0123 23:58:32.325298 3203 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:58:32.325511 kubelet[3203]: I0123 23:58:32.325456 3203 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:58:32.325511 kubelet[3203]: I0123 23:58:32.325467 3203 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:58:32.326381 kubelet[3203]: I0123 23:58:32.326287 3203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:58:32.330044 kubelet[3203]: E0123 23:58:32.329967 3203 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:58:32.368021 kubelet[3203]: I0123 23:58:32.367987 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556101 kubelet[3203]: I0123 23:58:32.368118 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556101 kubelet[3203]: I0123 23:58:32.368341 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556101 kubelet[3203]: W0123 23:58:32.381191 3203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:58:32.556101 kubelet[3203]: E0123 23:58:32.381257 3203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556101 kubelet[3203]: W0123 23:58:32.381308 3203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:58:32.556101 kubelet[3203]: W0123 23:58:32.381663 3203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:58:32.556101 kubelet[3203]: I0123 23:58:32.418050 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c30a4a50a934bba493952f11af0395c1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-95a9bf6543\" (UID: \"c30a4a50a934bba493952f11af0395c1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556101 kubelet[3203]: I0123 23:58:32.418088 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556310 kubelet[3203]: I0123 23:58:32.418112 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556310 kubelet[3203]: I0123 23:58:32.418128 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556310 kubelet[3203]: I0123 23:58:32.418159 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed21faeef6d83c63012b4ed351a6d55d-kubeconfig\") pod 
\"kube-scheduler-ci-4081.3.6-n-95a9bf6543\" (UID: \"ed21faeef6d83c63012b4ed351a6d55d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556310 kubelet[3203]: I0123 23:58:32.418179 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c30a4a50a934bba493952f11af0395c1-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-95a9bf6543\" (UID: \"c30a4a50a934bba493952f11af0395c1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556310 kubelet[3203]: I0123 23:58:32.418200 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c30a4a50a934bba493952f11af0395c1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-95a9bf6543\" (UID: \"c30a4a50a934bba493952f11af0395c1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556310 kubelet[3203]: I0123 23:58:32.434531 3203 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556436 kubelet[3203]: I0123 23:58:32.447686 3203 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556436 kubelet[3203]: I0123 23:58:32.519218 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.556436 kubelet[3203]: I0123 23:58:32.519280 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b65863bac2e4604abf90512d2ef76f1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" (UID: \"0b65863bac2e4604abf90512d2ef76f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:32.557651 kubelet[3203]: I0123 23:58:32.557392 3203 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:33.215494 kubelet[3203]: I0123 23:58:33.215457 3203 apiserver.go:52] "Watching apiserver" Jan 23 23:58:33.220129 kubelet[3203]: I0123 23:58:33.220103 3203 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:58:33.297445 kubelet[3203]: I0123 23:58:33.297416 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:33.298965 kubelet[3203]: I0123 23:58:33.297825 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:33.309172 kubelet[3203]: W0123 23:58:33.309147 3203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:58:33.309377 kubelet[3203]: E0123 23:58:33.309345 3203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-95a9bf6543\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:33.309862 kubelet[3203]: W0123 23:58:33.309845 3203 warnings.go:70] metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:58:33.309955 kubelet[3203]: E0123 23:58:33.309925 3203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-95a9bf6543\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" Jan 23 23:58:33.337914 kubelet[3203]: I0123 23:58:33.337847 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-95a9bf6543" podStartSLOduration=3.337827648 podStartE2EDuration="3.337827648s" podCreationTimestamp="2026-01-23 23:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:58:33.324832367 +0000 UTC m=+1.173899449" watchObservedRunningTime="2026-01-23 23:58:33.337827648 +0000 UTC m=+1.186894770" Jan 23 23:58:33.350999 kubelet[3203]: I0123 23:58:33.350935 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-95a9bf6543" podStartSLOduration=1.350920249 podStartE2EDuration="1.350920249s" podCreationTimestamp="2026-01-23 23:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:58:33.338091768 +0000 UTC m=+1.187158850" watchObservedRunningTime="2026-01-23 23:58:33.350920249 +0000 UTC m=+1.199987331" Jan 23 23:58:33.369398 kubelet[3203]: I0123 23:58:33.368968 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-95a9bf6543" podStartSLOduration=1.368933811 podStartE2EDuration="1.368933811s" podCreationTimestamp="2026-01-23 23:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:58:33.351615409 +0000 UTC m=+1.200682491" watchObservedRunningTime="2026-01-23 23:58:33.368933811 +0000 UTC m=+1.218000893" Jan 23 23:58:37.600146 kubelet[3203]: I0123 23:58:37.600058 3203 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:58:37.601331 containerd[1736]: time="2026-01-23T23:58:37.600912615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:58:37.601797 kubelet[3203]: I0123 23:58:37.601106 3203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:58:38.521486 systemd[1]: Created slice kubepods-besteffort-pod86f10704_5da5_44fb_87a9_737c64ce8e0f.slice - libcontainer container kubepods-besteffort-pod86f10704_5da5_44fb_87a9_737c64ce8e0f.slice. 
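The pod_startup_latency_tracker entries above report podStartSLOduration, which for these never-pulled static images (firstStartedPulling and lastFinishedPulling stay at the zero time) appears to line up with watchObservedRunningTime minus podCreationTimestamp. A short sketch reproduces the 3.337827648s figure from the controller-manager line using the two timestamps copied from the log:

```go
// Sketch: reproducing podStartSLOduration from the tracker entry.
// Both timestamps are copied verbatim from the log line.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-01-23 23:58:30 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2026-01-23 23:58:33.337827648 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(observed.Sub(created)) // 3.337827648s
}
```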
Jan 23 23:58:38.551186 kubelet[3203]: I0123 23:58:38.551071 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86f10704-5da5-44fb-87a9-737c64ce8e0f-kube-proxy\") pod \"kube-proxy-db5kv\" (UID: \"86f10704-5da5-44fb-87a9-737c64ce8e0f\") " pod="kube-system/kube-proxy-db5kv" Jan 23 23:58:38.551186 kubelet[3203]: I0123 23:58:38.551106 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bzs\" (UniqueName: \"kubernetes.io/projected/86f10704-5da5-44fb-87a9-737c64ce8e0f-kube-api-access-v4bzs\") pod \"kube-proxy-db5kv\" (UID: \"86f10704-5da5-44fb-87a9-737c64ce8e0f\") " pod="kube-system/kube-proxy-db5kv" Jan 23 23:58:38.551186 kubelet[3203]: I0123 23:58:38.551127 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86f10704-5da5-44fb-87a9-737c64ce8e0f-xtables-lock\") pod \"kube-proxy-db5kv\" (UID: \"86f10704-5da5-44fb-87a9-737c64ce8e0f\") " pod="kube-system/kube-proxy-db5kv" Jan 23 23:58:38.551186 kubelet[3203]: I0123 23:58:38.551144 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86f10704-5da5-44fb-87a9-737c64ce8e0f-lib-modules\") pod \"kube-proxy-db5kv\" (UID: \"86f10704-5da5-44fb-87a9-737c64ce8e0f\") " pod="kube-system/kube-proxy-db5kv" Jan 23 23:58:38.691486 systemd[1]: Created slice kubepods-besteffort-pod234fc730_c3eb_464f_87de_5722b424ba99.slice - libcontainer container kubepods-besteffort-pod234fc730_c3eb_464f_87de_5722b424ba99.slice. Jan 23 23:58:38.753373 kubelet[3203]: I0123 23:58:38.753339 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78cn2\" (UniqueName: \"kubernetes.io/projected/234fc730-c3eb-464f-87de-5722b424ba99-kube-api-access-78cn2\") pod \"tigera-operator-7dcd859c48-xzzhc\" (UID: \"234fc730-c3eb-464f-87de-5722b424ba99\") " pod="tigera-operator/tigera-operator-7dcd859c48-xzzhc" Jan 23 23:58:38.753373 kubelet[3203]: I0123 23:58:38.753379 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/234fc730-c3eb-464f-87de-5722b424ba99-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xzzhc\" (UID: \"234fc730-c3eb-464f-87de-5722b424ba99\") " pod="tigera-operator/tigera-operator-7dcd859c48-xzzhc" Jan 23 23:58:38.829841 containerd[1736]: time="2026-01-23T23:58:38.829660061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db5kv,Uid:86f10704-5da5-44fb-87a9-737c64ce8e0f,Namespace:kube-system,Attempt:0,}" Jan 23 23:58:38.873409 containerd[1736]: time="2026-01-23T23:58:38.873111912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:38.873409 containerd[1736]: time="2026-01-23T23:58:38.873161552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:38.873409 containerd[1736]: time="2026-01-23T23:58:38.873176552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:38.873409 containerd[1736]: time="2026-01-23T23:58:38.873245712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:38.895179 systemd[1]: Started cri-containerd-f68056dd387753348f3bac6cd93385e6e89e507e8ad1126c8efe265a007b3733.scope - libcontainer container f68056dd387753348f3bac6cd93385e6e89e507e8ad1126c8efe265a007b3733. Jan 23 23:58:38.911936 containerd[1736]: time="2026-01-23T23:58:38.911892283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db5kv,Uid:86f10704-5da5-44fb-87a9-737c64ce8e0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f68056dd387753348f3bac6cd93385e6e89e507e8ad1126c8efe265a007b3733\"" Jan 23 23:58:38.916079 containerd[1736]: time="2026-01-23T23:58:38.915928924Z" level=info msg="CreateContainer within sandbox \"f68056dd387753348f3bac6cd93385e6e89e507e8ad1126c8efe265a007b3733\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:58:38.957368 containerd[1736]: time="2026-01-23T23:58:38.957324135Z" level=info msg="CreateContainer within sandbox \"f68056dd387753348f3bac6cd93385e6e89e507e8ad1126c8efe265a007b3733\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02249da6dba239c5682c63ee86b34c29d45a3199bef667a827571c5b5dbe7a00\"" Jan 23 23:58:38.960034 containerd[1736]: time="2026-01-23T23:58:38.959032615Z" level=info msg="StartContainer for \"02249da6dba239c5682c63ee86b34c29d45a3199bef667a827571c5b5dbe7a00\"" Jan 23 23:58:38.983157 systemd[1]: Started cri-containerd-02249da6dba239c5682c63ee86b34c29d45a3199bef667a827571c5b5dbe7a00.scope - libcontainer container 02249da6dba239c5682c63ee86b34c29d45a3199bef667a827571c5b5dbe7a00. Jan 23 23:58:38.994731 containerd[1736]: time="2026-01-23T23:58:38.994688505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xzzhc,Uid:234fc730-c3eb-464f-87de-5722b424ba99,Namespace:tigera-operator,Attempt:0,}" Jan 23 23:58:39.014929 containerd[1736]: time="2026-01-23T23:58:39.014811870Z" level=info msg="StartContainer for \"02249da6dba239c5682c63ee86b34c29d45a3199bef667a827571c5b5dbe7a00\" returns successfully" Jan 23 23:58:39.034976 containerd[1736]: time="2026-01-23T23:58:39.034098155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:39.034976 containerd[1736]: time="2026-01-23T23:58:39.034188875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:39.034976 containerd[1736]: time="2026-01-23T23:58:39.034217195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:39.034976 containerd[1736]: time="2026-01-23T23:58:39.034301155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:39.057135 systemd[1]: Started cri-containerd-94a5a1dab3e0ad01ad7b36d968c1222eaa864f2d05cec1fdc2fe3362d516f87f.scope - libcontainer container 94a5a1dab3e0ad01ad7b36d968c1222eaa864f2d05cec1fdc2fe3362d516f87f. 
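The "RunPodSandbox ... returns sandbox id" entries are the kubelet's CRI calls into containerd's CRI plugin. For illustration, the same call can be issued directly against the CRI socket; the sketch below uses the usual containerd socket path and the kube-proxy metadata from the log, but a real request also needs log directory, DNS, and Linux settings, so treat it as a shape sketch rather than a working pod setup:

```go
// Hedged sketch of the CRI RunPodSandbox call behind the log entries.
// Metadata values are copied from the log; a production request carries
// considerably more PodSandboxConfig than shown here.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-db5kv",
				Uid:       "86f10704-5da5-44fb-87a9-737c64ce8e0f",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox id: %s", resp.PodSandboxId) // e.g. f68056dd3877...
}
```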
Jan 23 23:58:39.090080 containerd[1736]: time="2026-01-23T23:58:39.089964650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xzzhc,Uid:234fc730-c3eb-464f-87de-5722b424ba99,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"94a5a1dab3e0ad01ad7b36d968c1222eaa864f2d05cec1fdc2fe3362d516f87f\"" Jan 23 23:58:39.092221 containerd[1736]: time="2026-01-23T23:58:39.092150331Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 23:58:39.330566 kubelet[3203]: I0123 23:58:39.330257 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-db5kv" podStartSLOduration=1.330239354 podStartE2EDuration="1.330239354s" podCreationTimestamp="2026-01-23 23:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:58:39.329937874 +0000 UTC m=+7.179004996" watchObservedRunningTime="2026-01-23 23:58:39.330239354 +0000 UTC m=+7.179306436" Jan 23 23:58:40.900795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585002331.mount: Deactivated successfully. Jan 23 23:58:41.265468 containerd[1736]: time="2026-01-23T23:58:41.265424147Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:41.269576 containerd[1736]: time="2026-01-23T23:58:41.269547868Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 23:58:41.271835 containerd[1736]: time="2026-01-23T23:58:41.271805029Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:41.276772 containerd[1736]: time="2026-01-23T23:58:41.276728150Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:41.277555 containerd[1736]: time="2026-01-23T23:58:41.277444671Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.18525746s" Jan 23 23:58:41.277555 containerd[1736]: time="2026-01-23T23:58:41.277474791Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 23:58:41.279744 containerd[1736]: time="2026-01-23T23:58:41.279713191Z" level=info msg="CreateContainer within sandbox \"94a5a1dab3e0ad01ad7b36d968c1222eaa864f2d05cec1fdc2fe3362d516f87f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 23:58:41.307103 containerd[1736]: time="2026-01-23T23:58:41.307071598Z" level=info msg="CreateContainer within sandbox \"94a5a1dab3e0ad01ad7b36d968c1222eaa864f2d05cec1fdc2fe3362d516f87f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6fdc69cc7c3e28ffde6b7080a4f2872bb297d8dd9a7afcc50916b4ddcd56368c\"" Jan 23 23:58:41.307971 containerd[1736]: time="2026-01-23T23:58:41.307553439Z" level=info msg="StartContainer for \"6fdc69cc7c3e28ffde6b7080a4f2872bb297d8dd9a7afcc50916b4ddcd56368c\"" Jan 
23 23:58:41.331278 systemd[1]: Started cri-containerd-6fdc69cc7c3e28ffde6b7080a4f2872bb297d8dd9a7afcc50916b4ddcd56368c.scope - libcontainer container 6fdc69cc7c3e28ffde6b7080a4f2872bb297d8dd9a7afcc50916b4ddcd56368c. Jan 23 23:58:41.357615 containerd[1736]: time="2026-01-23T23:58:41.357260252Z" level=info msg="StartContainer for \"6fdc69cc7c3e28ffde6b7080a4f2872bb297d8dd9a7afcc50916b4ddcd56368c\" returns successfully" Jan 23 23:58:42.327750 kubelet[3203]: I0123 23:58:42.327568 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xzzhc" podStartSLOduration=2.140754208 podStartE2EDuration="4.327549069s" podCreationTimestamp="2026-01-23 23:58:38 +0000 UTC" firstStartedPulling="2026-01-23 23:58:39.09146521 +0000 UTC m=+6.940532292" lastFinishedPulling="2026-01-23 23:58:41.278260071 +0000 UTC m=+9.127327153" observedRunningTime="2026-01-23 23:58:42.327252189 +0000 UTC m=+10.176319271" watchObservedRunningTime="2026-01-23 23:58:42.327549069 +0000 UTC m=+10.176616151" Jan 23 23:58:47.284199 sudo[2237]: pam_unix(sudo:session): session closed for user root Jan 23 23:58:47.361400 sshd[2234]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:47.366505 systemd[1]: sshd@6-10.200.20.27:22-10.200.16.10:51334.service: Deactivated successfully. Jan 23 23:58:47.367937 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:58:47.369089 systemd[1]: session-9.scope: Consumed 5.777s CPU time, 149.2M memory peak, 0B memory swap peak. Jan 23 23:58:47.372232 systemd-logind[1715]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:58:47.374462 systemd-logind[1715]: Removed session 9. Jan 23 23:58:55.888691 kubelet[3203]: I0123 23:58:55.888270 3203 status_manager.go:890] "Failed to get status for pod" podUID="d4d90793-4654-4acc-b5ca-3915e48cebac" pod="calico-system/calico-typha-75b8f697b8-bkw6x" err="pods \"calico-typha-75b8f697b8-bkw6x\" is forbidden: User \"system:node:ci-4081.3.6-n-95a9bf6543\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object" Jan 23 23:58:55.888691 kubelet[3203]: W0123 23:58:55.888327 3203 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081.3.6-n-95a9bf6543" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object Jan 23 23:58:55.888691 kubelet[3203]: E0123 23:58:55.888348 3203 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081.3.6-n-95a9bf6543\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object" logger="UnhandledError" Jan 23 23:58:55.888691 kubelet[3203]: W0123 23:58:55.888442 3203 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-n-95a9bf6543" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object Jan 23 23:58:55.889124 kubelet[3203]: E0123 23:58:55.888459 3203 reflector.go:166] "Unhandled Error" 
err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-n-95a9bf6543\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object" logger="UnhandledError" Jan 23 23:58:55.889124 kubelet[3203]: W0123 23:58:55.888493 3203 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081.3.6-n-95a9bf6543" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object Jan 23 23:58:55.889124 kubelet[3203]: E0123 23:58:55.888503 3203 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.6-n-95a9bf6543\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object" logger="UnhandledError" Jan 23 23:58:55.894004 systemd[1]: Created slice kubepods-besteffort-podd4d90793_4654_4acc_b5ca_3915e48cebac.slice - libcontainer container kubepods-besteffort-podd4d90793_4654_4acc_b5ca_3915e48cebac.slice. Jan 23 23:58:55.948817 kubelet[3203]: I0123 23:58:55.948697 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4d90793-4654-4acc-b5ca-3915e48cebac-tigera-ca-bundle\") pod \"calico-typha-75b8f697b8-bkw6x\" (UID: \"d4d90793-4654-4acc-b5ca-3915e48cebac\") " pod="calico-system/calico-typha-75b8f697b8-bkw6x" Jan 23 23:58:55.948817 kubelet[3203]: I0123 23:58:55.948742 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d4d90793-4654-4acc-b5ca-3915e48cebac-typha-certs\") pod \"calico-typha-75b8f697b8-bkw6x\" (UID: \"d4d90793-4654-4acc-b5ca-3915e48cebac\") " pod="calico-system/calico-typha-75b8f697b8-bkw6x" Jan 23 23:58:55.948817 kubelet[3203]: I0123 23:58:55.948763 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4s9\" (UniqueName: \"kubernetes.io/projected/d4d90793-4654-4acc-b5ca-3915e48cebac-kube-api-access-9h4s9\") pod \"calico-typha-75b8f697b8-bkw6x\" (UID: \"d4d90793-4654-4acc-b5ca-3915e48cebac\") " pod="calico-system/calico-typha-75b8f697b8-bkw6x" Jan 23 23:58:56.121307 systemd[1]: Created slice kubepods-besteffort-pod96c301d2_102f_4954_814e_9ad2901dcc4b.slice - libcontainer container kubepods-besteffort-pod96c301d2_102f_4954_814e_9ad2901dcc4b.slice. 
Jan 23 23:58:56.250506 kubelet[3203]: I0123 23:58:56.250326 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96c301d2-102f-4954-814e-9ad2901dcc4b-tigera-ca-bundle\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250506 kubelet[3203]: I0123 23:58:56.250369 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-cni-bin-dir\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250506 kubelet[3203]: I0123 23:58:56.250385 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-cni-log-dir\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250506 kubelet[3203]: I0123 23:58:56.250401 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-var-lib-calico\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250506 kubelet[3203]: I0123 23:58:56.250419 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-var-run-calico\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250764 kubelet[3203]: I0123 23:58:56.250433 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-xtables-lock\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250764 kubelet[3203]: I0123 23:58:56.250450 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-flexvol-driver-host\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250764 kubelet[3203]: I0123 23:58:56.250474 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-lib-modules\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250764 kubelet[3203]: I0123 23:58:56.250531 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/96c301d2-102f-4954-814e-9ad2901dcc4b-node-certs\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250764 kubelet[3203]: I0123 23:58:56.250570 3203 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29pkc\" (UniqueName: \"kubernetes.io/projected/96c301d2-102f-4954-814e-9ad2901dcc4b-kube-api-access-29pkc\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250875 kubelet[3203]: I0123 23:58:56.250590 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-policysync\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.250875 kubelet[3203]: I0123 23:58:56.250611 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/96c301d2-102f-4954-814e-9ad2901dcc4b-cni-net-dir\") pod \"calico-node-lxp5v\" (UID: \"96c301d2-102f-4954-814e-9ad2901dcc4b\") " pod="calico-system/calico-node-lxp5v" Jan 23 23:58:56.310299 kubelet[3203]: E0123 23:58:56.309723 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:58:56.352601 kubelet[3203]: E0123 23:58:56.352558 3203 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:58:56.352601 kubelet[3203]: W0123 23:58:56.352593 3203 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:58:56.352836 kubelet[3203]: E0123 23:58:56.352620 3203 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:58:56.353078 kubelet[3203]: E0123 23:58:56.352898 3203 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:58:56.353078 kubelet[3203]: W0123 23:58:56.352910 3203 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:58:56.353078 kubelet[3203]: E0123 23:58:56.352921 3203 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:58:56.354027 kubelet[3203]: E0123 23:58:56.354009 3203 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:58:56.354177 kubelet[3203]: W0123 23:58:56.354107 3203 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:58:56.354177 kubelet[3203]: E0123 23:58:56.354127 3203 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:58:56.354464 kubelet[3203]: E0123 23:58:56.354451 3203 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:58:56.354604 kubelet[3203]: W0123 23:58:56.354499 3203 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:58:56.354604 kubelet[3203]: E0123 23:58:56.354512 3203 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:58:56.452657 kubelet[3203]: I0123 23:58:56.452554 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89876e47-5c25-4ed8-975b-aadadd46d2c9-socket-dir\") pod \"csi-node-driver-phrmd\" (UID: \"89876e47-5c25-4ed8-975b-aadadd46d2c9\") " pod="calico-system/csi-node-driver-phrmd" Jan 23 23:58:56.453153 kubelet[3203]: I0123 23:58:56.453032 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/89876e47-5c25-4ed8-975b-aadadd46d2c9-varrun\") pod \"csi-node-driver-phrmd\" (UID: \"89876e47-5c25-4ed8-975b-aadadd46d2c9\") " pod="calico-system/csi-node-driver-phrmd" Jan 23 23:58:56.453437 kubelet[3203]: I0123 23:58:56.453395 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89876e47-5c25-4ed8-975b-aadadd46d2c9-kubelet-dir\") pod \"csi-node-driver-phrmd\" (UID: \"89876e47-5c25-4ed8-975b-aadadd46d2c9\") " pod="calico-system/csi-node-driver-phrmd" Jan 23 23:58:56.453817 kubelet[3203]: I0123 23:58:56.453776 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89876e47-5c25-4ed8-975b-aadadd46d2c9-registration-dir\") pod \"csi-node-driver-phrmd\" (UID: \"89876e47-5c25-4ed8-975b-aadadd46d2c9\") " pod="calico-system/csi-node-driver-phrmd" Jan 23 23:58:56.454348 kubelet[3203]: I0123 23:58:56.454318 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmgrq\" (UniqueName: \"kubernetes.io/projected/89876e47-5c25-4ed8-975b-aadadd46d2c9-kube-api-access-dmgrq\") pod \"csi-node-driver-phrmd\" (UID: \"89876e47-5c25-4ed8-975b-aadadd46d2c9\") " pod="calico-system/csi-node-driver-phrmd"
Jan 23 23:58:56.913853 kubelet[3203]: E0123 23:58:56.913832 3203 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:58:56.913853 kubelet[3203]: W0123 23:58:56.913850 3203 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:58:56.913917 kubelet[3203]: E0123 23:58:56.913864 3203 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:58:57.028714 containerd[1736]: time="2026-01-23T23:58:57.028322023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lxp5v,Uid:96c301d2-102f-4954-814e-9ad2901dcc4b,Namespace:calico-system,Attempt:0,}" Jan 23 23:58:57.067620 containerd[1736]: time="2026-01-23T23:58:57.067525469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:57.067620 containerd[1736]: time="2026-01-23T23:58:57.067581829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:57.067620 containerd[1736]: time="2026-01-23T23:58:57.067593229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:57.067620 containerd[1736]: time="2026-01-23T23:58:57.067673669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:57.089198 systemd[1]: Started cri-containerd-38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d.scope - libcontainer container 38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d.
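The window from 23:58:56.354 to 23:58:56.913 above is dominated by many repetitions of a single failure triplet: kubelet's FlexVolume prober execs a driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist, stdout comes back empty, and unmarshalling "" as JSON fails with "unexpected end of JSON input". Whatever the intended nodeagent~uds driver is, it is simply not installed on this node. As a minimal sketch of the handshake the prober expects (assuming only the contract visible in these messages, not the real driver), a FlexVolume driver that satisfies the init call could look like this in Go:

```go
// Minimal FlexVolume driver sketch. Kubelet execs the driver binary with a
// verb as argv[1] ("init" at probe time) and parses stdout as JSON; an
// empty stdout fails exactly the way the log above shows.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON object kubelet's driver-call code expects.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and declare that this driver needs no attach step.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this sketch does not implement is reported as unsupported.
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

The probing errors are noisy but harmless here: the kubelet skips the broken plugin directory and carries on, which is why the CSI volume attach messages for csi-node-driver-phrmd proceed in between them.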
Jan 23 23:58:57.104272 containerd[1736]: time="2026-01-23T23:58:57.104235835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75b8f697b8-bkw6x,Uid:d4d90793-4654-4acc-b5ca-3915e48cebac,Namespace:calico-system,Attempt:0,}" Jan 23 23:58:57.110662 containerd[1736]: time="2026-01-23T23:58:57.110625236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lxp5v,Uid:96c301d2-102f-4954-814e-9ad2901dcc4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d\"" Jan 23 23:58:57.112346 containerd[1736]: time="2026-01-23T23:58:57.112237716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 23:58:57.161638 containerd[1736]: time="2026-01-23T23:58:57.159762683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:57.161638 containerd[1736]: time="2026-01-23T23:58:57.159818723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:57.161638 containerd[1736]: time="2026-01-23T23:58:57.159834123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:57.161638 containerd[1736]: time="2026-01-23T23:58:57.159912683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:57.184108 systemd[1]: Started cri-containerd-bb735d1f3febe1e5509e6dbeeaee11f308e3645fa42319bd884843f251d88fe4.scope - libcontainer container bb735d1f3febe1e5509e6dbeeaee11f308e3645fa42319bd884843f251d88fe4. Jan 23 23:58:57.211810 containerd[1736]: time="2026-01-23T23:58:57.211768651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75b8f697b8-bkw6x,Uid:d4d90793-4654-4acc-b5ca-3915e48cebac,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb735d1f3febe1e5509e6dbeeaee11f308e3645fa42319bd884843f251d88fe4\"" Jan 23 23:58:58.269548 kubelet[3203]: E0123 23:58:58.268212 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:58:58.355816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641405966.mount: Deactivated successfully. 
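The recurring "cni plugin not initialized" / NetworkPluginNotReady errors persist until Calico's install-cni step writes a network config into /etc/cni/net.d (the fs-change event on /etc/cni/net.d/calico-kubeconfig further down is part of that same step). For orientation, a representative Calico conflist of the kind that ends up in that directory looks roughly like the following; the file name (e.g. 10-calico.conflist) and exact field values are assumptions for illustration, not values taken from this log:

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "nodename": "ci-4081.3.6-n-95a9bf6543",
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": { "portMappings": true }
    }
  ]
}
```

Until a file like this exists, the kubelet keeps the node NotReady for pod networking, which is exactly the "Error syncing pod, skipping" loop visible for csi-node-driver-phrmd below.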
Jan 23 23:58:58.491485 containerd[1736]: time="2026-01-23T23:58:58.491433056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:58.495029 containerd[1736]: time="2026-01-23T23:58:58.495001537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Jan 23 23:58:58.498816 containerd[1736]: time="2026-01-23T23:58:58.497684178Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:58.501927 containerd[1736]: time="2026-01-23T23:58:58.501891580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:58.502490 containerd[1736]: time="2026-01-23T23:58:58.502449260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.390163984s" Jan 23 23:58:58.502490 containerd[1736]: time="2026-01-23T23:58:58.502482380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 23:58:58.503851 containerd[1736]: time="2026-01-23T23:58:58.503827901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 23:58:58.506113 containerd[1736]: time="2026-01-23T23:58:58.506068222Z" level=info msg="CreateContainer within sandbox \"38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 23:58:58.547710 containerd[1736]: time="2026-01-23T23:58:58.547606360Z" level=info msg="CreateContainer within sandbox \"38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f\"" Jan 23 23:58:58.549177 containerd[1736]: time="2026-01-23T23:58:58.548698960Z" level=info msg="StartContainer for \"9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f\"" Jan 23 23:58:58.582097 systemd[1]: Started cri-containerd-9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f.scope - libcontainer container 9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f. Jan 23 23:58:58.611430 containerd[1736]: time="2026-01-23T23:58:58.611373548Z" level=info msg="StartContainer for \"9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f\" returns successfully" Jan 23 23:58:58.621703 systemd[1]: cri-containerd-9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f.scope: Deactivated successfully. Jan 23 23:58:58.643146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:59.242747 containerd[1736]: time="2026-01-23T23:58:59.242672101Z" level=info msg="shim disconnected" id=9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f namespace=k8s.io Jan 23 23:58:59.242747 containerd[1736]: time="2026-01-23T23:58:59.242740261Z" level=warning msg="cleaning up after shim disconnected" id=9df0c0fd05bda7519d9e2e38d3bc804c95b3f9fbd3a196c01c63c2bb1a10dd3f namespace=k8s.io Jan 23 23:58:59.242747 containerd[1736]: time="2026-01-23T23:58:59.242749661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:59:00.267314 kubelet[3203]: E0123 23:59:00.267258 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:59:00.618743 containerd[1736]: time="2026-01-23T23:59:00.618628057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:00.621445 containerd[1736]: time="2026-01-23T23:59:00.621395498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858" Jan 23 23:59:00.624893 containerd[1736]: time="2026-01-23T23:59:00.624840860Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:00.630107 containerd[1736]: time="2026-01-23T23:59:00.628924782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:00.630107 containerd[1736]: time="2026-01-23T23:59:00.629454462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.124668841s" Jan 23 23:59:00.630107 containerd[1736]: time="2026-01-23T23:59:00.629771022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 23:59:00.631410 containerd[1736]: time="2026-01-23T23:59:00.631381903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 23:59:00.640606 containerd[1736]: time="2026-01-23T23:59:00.640573747Z" level=info msg="CreateContainer within sandbox \"bb735d1f3febe1e5509e6dbeeaee11f308e3645fa42319bd884843f251d88fe4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 23:59:00.681048 containerd[1736]: time="2026-01-23T23:59:00.680937764Z" level=info msg="CreateContainer within sandbox \"bb735d1f3febe1e5509e6dbeeaee11f308e3645fa42319bd884843f251d88fe4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a96f22db16491af3f4084f765df66103618535f61953e2c360a4dd19c946cea9\"" Jan 23 23:59:00.682415 containerd[1736]: time="2026-01-23T23:59:00.681454924Z" level=info msg="StartContainer for \"a96f22db16491af3f4084f765df66103618535f61953e2c360a4dd19c946cea9\"" Jan 23 23:59:00.709086 systemd[1]: Started 
cri-containerd-a96f22db16491af3f4084f765df66103618535f61953e2c360a4dd19c946cea9.scope - libcontainer container a96f22db16491af3f4084f765df66103618535f61953e2c360a4dd19c946cea9. Jan 23 23:59:00.744815 containerd[1736]: time="2026-01-23T23:59:00.744755472Z" level=info msg="StartContainer for \"a96f22db16491af3f4084f765df66103618535f61953e2c360a4dd19c946cea9\" returns successfully" Jan 23 23:59:02.275421 kubelet[3203]: E0123 23:59:02.274381 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:59:02.358271 kubelet[3203]: I0123 23:59:02.358228 3203 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 23:59:03.005379 containerd[1736]: time="2026-01-23T23:59:03.004392291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:03.006683 containerd[1736]: time="2026-01-23T23:59:03.006657212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 23:59:03.009502 containerd[1736]: time="2026-01-23T23:59:03.009473453Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:03.014310 containerd[1736]: time="2026-01-23T23:59:03.014270575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:03.015184 containerd[1736]: time="2026-01-23T23:59:03.015153976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.383617433s" Jan 23 23:59:03.015299 containerd[1736]: time="2026-01-23T23:59:03.015282256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:59:03.017975 containerd[1736]: time="2026-01-23T23:59:03.017914817Z" level=info msg="CreateContainer within sandbox \"38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:59:03.062489 containerd[1736]: time="2026-01-23T23:59:03.062356236Z" level=info msg="CreateContainer within sandbox \"38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d\"" Jan 23 23:59:03.064391 containerd[1736]: time="2026-01-23T23:59:03.062932836Z" level=info msg="StartContainer for \"9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d\"" Jan 23 23:59:03.093642 systemd[1]: Started cri-containerd-9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d.scope - libcontainer container 9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d. 
Jan 23 23:59:03.125800 containerd[1736]: time="2026-01-23T23:59:03.125684623Z" level=info msg="StartContainer for \"9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d\" returns successfully" Jan 23 23:59:03.379921 kubelet[3203]: I0123 23:59:03.379770 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75b8f697b8-bkw6x" podStartSLOduration=4.962804962 podStartE2EDuration="8.379751613s" podCreationTimestamp="2026-01-23 23:58:55 +0000 UTC" firstStartedPulling="2026-01-23 23:58:57.213749811 +0000 UTC m=+25.062816893" lastFinishedPulling="2026-01-23 23:59:00.630696462 +0000 UTC m=+28.479763544" observedRunningTime="2026-01-23 23:59:01.368664302 +0000 UTC m=+29.217731384" watchObservedRunningTime="2026-01-23 23:59:03.379751613 +0000 UTC m=+31.228818695" Jan 23 23:59:04.266937 kubelet[3203]: E0123 23:59:04.266889 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:59:04.296787 containerd[1736]: time="2026-01-23T23:59:04.296695771Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:59:04.300521 systemd[1]: cri-containerd-9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d.scope: Deactivated successfully. Jan 23 23:59:04.321854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d-rootfs.mount: Deactivated successfully. Jan 23 23:59:04.344197 kubelet[3203]: I0123 23:59:04.343842 3203 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:59:04.387692 systemd[1]: Created slice kubepods-burstable-pod5d2e99f6_dee0_4678_aa07_fbf33b420e68.slice - libcontainer container kubepods-burstable-pod5d2e99f6_dee0_4678_aa07_fbf33b420e68.slice. 
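The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration differs from it by exactly the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. the SLO figure excludes time spent pulling images. Checking the arithmetic against the timestamps in the line itself:

```latex
\begin{aligned}
\text{E2E}  &= 23{:}59{:}03.379751613 - 23{:}58{:}55.000000000 = 8.379751613\ \text{s}\\
\text{pull} &= 23{:}59{:}00.630696462 - 23{:}58{:}57.213749811 = 3.416946651\ \text{s}\\
\text{SLO}  &= 8.379751613 - 3.416946651 = 4.962804962\ \text{s}
\end{aligned}
```

which matches podStartSLOduration=4.962804962 exactly.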
Jan 23 23:59:04.398899 kubelet[3203]: W0123 23:59:04.398760 3203 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-n-95a9bf6543" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object
Jan 23 23:59:04.398899 kubelet[3203]: E0123 23:59:04.398794 3203 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-n-95a9bf6543\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object" logger="UnhandledError"
Jan 23 23:59:04.404184 kubelet[3203]: W0123 23:59:04.401551 3203 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.6-n-95a9bf6543" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object
Jan 23 23:59:04.404184 kubelet[3203]: E0123 23:59:04.401585 3203 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.6-n-95a9bf6543\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-n-95a9bf6543' and this object" logger="UnhandledError"
Jan 23 23:59:04.406433 systemd[1]: Created slice kubepods-burstable-pod30575e89_2706_4309_ac97_5d65652326e6.slice - libcontainer container kubepods-burstable-pod30575e89_2706_4309_ac97_5d65652326e6.slice.
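
[Editor's note: the two reflector failures above are the apiserver's node authorizer at work: a kubelet may read a secret or configmap only once a pod scheduled to its node actually references that object, so the list is denied with "no relationship found between node ... and this object" until the pod's volumes are registered. A hedged client-go sketch of probing such a decision with a SelfSubjectAccessReview; the kubeconfig path is an assumption, the namespace/resource names come from the log.]

package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: load the kubelet's own credentials and ask the apiserver
	// "may I list this configmap?" instead of triggering reflector errors.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "calico-apiserver",
				Verb:      "list",
				Resource:  "configmaps",
				Name:      "kube-root-ca.crt",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// With the node authorizer, Allowed flips to true only once a pod on this
	// node references the object (e.g. via a projected token volume).
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
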
Jan 23 23:59:04.410984 kubelet[3203]: I0123 23:59:04.409764 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a31be8f9-573e-4955-99b0-981cca2e99b2-tigera-ca-bundle\") pod \"calico-kube-controllers-6977ffbc55-s4jdp\" (UID: \"a31be8f9-573e-4955-99b0-981cca2e99b2\") " pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp"
Jan 23 23:59:04.410984 kubelet[3203]: I0123 23:59:04.409820 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwwh4\" (UniqueName: \"kubernetes.io/projected/4faca075-ea7c-45a7-8e70-a805a7593117-kube-api-access-lwwh4\") pod \"whisker-6fdf9dbdcc-dnl8w\" (UID: \"4faca075-ea7c-45a7-8e70-a805a7593117\") " pod="calico-system/whisker-6fdf9dbdcc-dnl8w"
Jan 23 23:59:04.410984 kubelet[3203]: I0123 23:59:04.409845 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2c7\" (UniqueName: \"kubernetes.io/projected/30575e89-2706-4309-ac97-5d65652326e6-kube-api-access-9z2c7\") pod \"coredns-668d6bf9bc-w8mnm\" (UID: \"30575e89-2706-4309-ac97-5d65652326e6\") " pod="kube-system/coredns-668d6bf9bc-w8mnm"
Jan 23 23:59:04.410984 kubelet[3203]: I0123 23:59:04.409868 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4faca075-ea7c-45a7-8e70-a805a7593117-whisker-backend-key-pair\") pod \"whisker-6fdf9dbdcc-dnl8w\" (UID: \"4faca075-ea7c-45a7-8e70-a805a7593117\") " pod="calico-system/whisker-6fdf9dbdcc-dnl8w"
Jan 23 23:59:04.410984 kubelet[3203]: I0123 23:59:04.409891 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7r6\" (UniqueName: \"kubernetes.io/projected/251b4c3c-e8df-4086-8bfb-8297ee672eec-kube-api-access-js7r6\") pod \"calico-apiserver-5f88658b6c-q6dt5\" (UID: \"251b4c3c-e8df-4086-8bfb-8297ee672eec\") " pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5"
Jan 23 23:59:04.411179 kubelet[3203]: I0123 23:59:04.409908 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d2e99f6-dee0-4678-aa07-fbf33b420e68-config-volume\") pod \"coredns-668d6bf9bc-snw5g\" (UID: \"5d2e99f6-dee0-4678-aa07-fbf33b420e68\") " pod="kube-system/coredns-668d6bf9bc-snw5g"
Jan 23 23:59:04.411179 kubelet[3203]: I0123 23:59:04.409928 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsq8t\" (UniqueName: \"kubernetes.io/projected/5d2e99f6-dee0-4678-aa07-fbf33b420e68-kube-api-access-gsq8t\") pod \"coredns-668d6bf9bc-snw5g\" (UID: \"5d2e99f6-dee0-4678-aa07-fbf33b420e68\") " pod="kube-system/coredns-668d6bf9bc-snw5g"
Jan 23 23:59:04.411179 kubelet[3203]: I0123 23:59:04.410439 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtpcv\" (UniqueName: \"kubernetes.io/projected/e0b5e5a7-1acb-4d63-8673-57e3c939b318-kube-api-access-qtpcv\") pod \"calico-apiserver-674d7cd84f-5hq44\" (UID: \"e0b5e5a7-1acb-4d63-8673-57e3c939b318\") " pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44"
Jan 23 23:59:04.411179 kubelet[3203]: I0123 23:59:04.410471 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30575e89-2706-4309-ac97-5d65652326e6-config-volume\") pod \"coredns-668d6bf9bc-w8mnm\" (UID: \"30575e89-2706-4309-ac97-5d65652326e6\") " pod="kube-system/coredns-668d6bf9bc-w8mnm"
Jan 23 23:59:04.411179 kubelet[3203]: I0123 23:59:04.410528 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4faca075-ea7c-45a7-8e70-a805a7593117-whisker-ca-bundle\") pod \"whisker-6fdf9dbdcc-dnl8w\" (UID: \"4faca075-ea7c-45a7-8e70-a805a7593117\") " pod="calico-system/whisker-6fdf9dbdcc-dnl8w"
Jan 23 23:59:04.411288 kubelet[3203]: I0123 23:59:04.410550 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0b5e5a7-1acb-4d63-8673-57e3c939b318-calico-apiserver-certs\") pod \"calico-apiserver-674d7cd84f-5hq44\" (UID: \"e0b5e5a7-1acb-4d63-8673-57e3c939b318\") " pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44"
Jan 23 23:59:04.411288 kubelet[3203]: I0123 23:59:04.410573 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/849bc66d-ccf9-400e-bccb-fea5f90abeb0-calico-apiserver-certs\") pod \"calico-apiserver-5f88658b6c-p27j5\" (UID: \"849bc66d-ccf9-400e-bccb-fea5f90abeb0\") " pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5"
Jan 23 23:59:04.411288 kubelet[3203]: I0123 23:59:04.410596 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjjk\" (UniqueName: \"kubernetes.io/projected/a31be8f9-573e-4955-99b0-981cca2e99b2-kube-api-access-tmjjk\") pod \"calico-kube-controllers-6977ffbc55-s4jdp\" (UID: \"a31be8f9-573e-4955-99b0-981cca2e99b2\") " pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp"
Jan 23 23:59:04.411288 kubelet[3203]: I0123 23:59:04.410631 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrx8s\" (UniqueName: \"kubernetes.io/projected/849bc66d-ccf9-400e-bccb-fea5f90abeb0-kube-api-access-lrx8s\") pod \"calico-apiserver-5f88658b6c-p27j5\" (UID: \"849bc66d-ccf9-400e-bccb-fea5f90abeb0\") " pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5"
Jan 23 23:59:04.411288 kubelet[3203]: I0123 23:59:04.410665 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/251b4c3c-e8df-4086-8bfb-8297ee672eec-calico-apiserver-certs\") pod \"calico-apiserver-5f88658b6c-q6dt5\" (UID: \"251b4c3c-e8df-4086-8bfb-8297ee672eec\") " pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5"
Jan 23 23:59:04.423708 systemd[1]: Created slice kubepods-besteffort-poda31be8f9_573e_4955_99b0_981cca2e99b2.slice - libcontainer container kubepods-besteffort-poda31be8f9_573e_4955_99b0_981cca2e99b2.slice.
Jan 23 23:59:04.433169 systemd[1]: Created slice kubepods-besteffort-pod849bc66d_ccf9_400e_bccb_fea5f90abeb0.slice - libcontainer container kubepods-besteffort-pod849bc66d_ccf9_400e_bccb_fea5f90abeb0.slice.
Jan 23 23:59:04.442626 systemd[1]: Created slice kubepods-besteffort-pod251b4c3c_e8df_4086_8bfb_8297ee672eec.slice - libcontainer container kubepods-besteffort-pod251b4c3c_e8df_4086_8bfb_8297ee672eec.slice.
Jan 23 23:59:04.450708 systemd[1]: Created slice kubepods-besteffort-pode0b5e5a7_1acb_4d63_8673_57e3c939b318.slice - libcontainer container kubepods-besteffort-pode0b5e5a7_1acb_4d63_8673_57e3c939b318.slice.
Jan 23 23:59:04.461752 systemd[1]: Created slice kubepods-besteffort-pod4faca075_ea7c_45a7_8e70_a805a7593117.slice - libcontainer container kubepods-besteffort-pod4faca075_ea7c_45a7_8e70_a805a7593117.slice.
Jan 23 23:59:04.468053 systemd[1]: Created slice kubepods-besteffort-pod693475f7_1f52_409e_89ad_83367b27d7ef.slice - libcontainer container kubepods-besteffort-pod693475f7_1f52_409e_89ad_83367b27d7ef.slice.
Jan 23 23:59:04.511648 kubelet[3203]: I0123 23:59:04.511597 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/693475f7-1f52-409e-89ad-83367b27d7ef-goldmane-key-pair\") pod \"goldmane-666569f655-27fdn\" (UID: \"693475f7-1f52-409e-89ad-83367b27d7ef\") " pod="calico-system/goldmane-666569f655-27fdn"
Jan 23 23:59:04.511648 kubelet[3203]: I0123 23:59:04.511649 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqht\" (UniqueName: \"kubernetes.io/projected/693475f7-1f52-409e-89ad-83367b27d7ef-kube-api-access-nbqht\") pod \"goldmane-666569f655-27fdn\" (UID: \"693475f7-1f52-409e-89ad-83367b27d7ef\") " pod="calico-system/goldmane-666569f655-27fdn"
Jan 23 23:59:04.511845 kubelet[3203]: I0123 23:59:04.511706 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/693475f7-1f52-409e-89ad-83367b27d7ef-config\") pod \"goldmane-666569f655-27fdn\" (UID: \"693475f7-1f52-409e-89ad-83367b27d7ef\") " pod="calico-system/goldmane-666569f655-27fdn"
Jan 23 23:59:04.511845 kubelet[3203]: I0123 23:59:04.511745 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/693475f7-1f52-409e-89ad-83367b27d7ef-goldmane-ca-bundle\") pod \"goldmane-666569f655-27fdn\" (UID: \"693475f7-1f52-409e-89ad-83367b27d7ef\") " pod="calico-system/goldmane-666569f655-27fdn"
Jan 23 23:59:05.307660 containerd[1736]: time="2026-01-23T23:59:05.306231430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snw5g,Uid:5d2e99f6-dee0-4678-aa07-fbf33b420e68,Namespace:kube-system,Attempt:0,}"
Jan 23 23:59:05.309352 containerd[1736]: time="2026-01-23T23:59:05.309314871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w8mnm,Uid:30575e89-2706-4309-ac97-5d65652326e6,Namespace:kube-system,Attempt:0,}"
Jan 23 23:59:05.309696 containerd[1736]: time="2026-01-23T23:59:05.309533151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fdf9dbdcc-dnl8w,Uid:4faca075-ea7c-45a7-8e70-a805a7593117,Namespace:calico-system,Attempt:0,}"
Jan 23 23:59:05.312956 containerd[1736]: time="2026-01-23T23:59:05.311288871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6977ffbc55-s4jdp,Uid:a31be8f9-573e-4955-99b0-981cca2e99b2,Namespace:calico-system,Attempt:0,}"
Jan 23 23:59:05.313050 containerd[1736]: time="2026-01-23T23:59:05.312998752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-27fdn,Uid:693475f7-1f52-409e-89ad-83367b27d7ef,Namespace:calico-system,Attempt:0,}"
Jan 23 23:59:05.528978 kubelet[3203]: E0123 23:59:05.528861 3203 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.528978 kubelet[3203]: E0123 23:59:05.528902 3203 projected.go:194] Error preparing data for projected volume kube-api-access-js7r6 for pod calico-apiserver/calico-apiserver-5f88658b6c-q6dt5: failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.528978 kubelet[3203]: E0123 23:59:05.528981 3203 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/251b4c3c-e8df-4086-8bfb-8297ee672eec-kube-api-access-js7r6 podName:251b4c3c-e8df-4086-8bfb-8297ee672eec nodeName:}" failed. No retries permitted until 2026-01-23 23:59:06.028960792 +0000 UTC m=+33.878027874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-js7r6" (UniqueName: "kubernetes.io/projected/251b4c3c-e8df-4086-8bfb-8297ee672eec-kube-api-access-js7r6") pod "calico-apiserver-5f88658b6c-q6dt5" (UID: "251b4c3c-e8df-4086-8bfb-8297ee672eec") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.531069 kubelet[3203]: E0123 23:59:05.530981 3203 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.531069 kubelet[3203]: E0123 23:59:05.531007 3203 projected.go:194] Error preparing data for projected volume kube-api-access-qtpcv for pod calico-apiserver/calico-apiserver-674d7cd84f-5hq44: failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.531069 kubelet[3203]: E0123 23:59:05.531055 3203 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0b5e5a7-1acb-4d63-8673-57e3c939b318-kube-api-access-qtpcv podName:e0b5e5a7-1acb-4d63-8673-57e3c939b318 nodeName:}" failed. No retries permitted until 2026-01-23 23:59:06.031041872 +0000 UTC m=+33.880108954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qtpcv" (UniqueName: "kubernetes.io/projected/e0b5e5a7-1acb-4d63-8673-57e3c939b318-kube-api-access-qtpcv") pod "calico-apiserver-674d7cd84f-5hq44" (UID: "e0b5e5a7-1acb-4d63-8673-57e3c939b318") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.532117 kubelet[3203]: E0123 23:59:05.532050 3203 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.532117 kubelet[3203]: E0123 23:59:05.532070 3203 projected.go:194] Error preparing data for projected volume kube-api-access-lrx8s for pod calico-apiserver/calico-apiserver-5f88658b6c-p27j5: failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.532117 kubelet[3203]: E0123 23:59:05.532101 3203 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/849bc66d-ccf9-400e-bccb-fea5f90abeb0-kube-api-access-lrx8s podName:849bc66d-ccf9-400e-bccb-fea5f90abeb0 nodeName:}" failed. No retries permitted until 2026-01-23 23:59:06.032091032 +0000 UTC m=+33.881158114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lrx8s" (UniqueName: "kubernetes.io/projected/849bc66d-ccf9-400e-bccb-fea5f90abeb0-kube-api-access-lrx8s") pod "calico-apiserver-5f88658b6c-p27j5" (UID: "849bc66d-ccf9-400e-bccb-fea5f90abeb0") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 23:59:05.617739 containerd[1736]: time="2026-01-23T23:59:05.617621048Z" level=info msg="shim disconnected" id=9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d namespace=k8s.io
Jan 23 23:59:05.617739 containerd[1736]: time="2026-01-23T23:59:05.617668728Z" level=warning msg="cleaning up after shim disconnected" id=9150da5715a28d64bb52a884a4e58a0fed0d3618c26ffebf10a527525eb3e46d namespace=k8s.io
Jan 23 23:59:05.617739 containerd[1736]: time="2026-01-23T23:59:05.617678208Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:05.885431 containerd[1736]: time="2026-01-23T23:59:05.885275778Z" level=error msg="Failed to destroy network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.885652 containerd[1736]: time="2026-01-23T23:59:05.885573098Z" level=error msg="encountered an error cleaning up failed sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.885652 containerd[1736]: time="2026-01-23T23:59:05.885622058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fdf9dbdcc-dnl8w,Uid:4faca075-ea7c-45a7-8e70-a805a7593117,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.886098 kubelet[3203]: E0123 23:59:05.885980 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.886482 kubelet[3203]: E0123 23:59:05.886040 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fdf9dbdcc-dnl8w"
Jan 23 23:59:05.886572 kubelet[3203]: E0123 23:59:05.886493 3203 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fdf9dbdcc-dnl8w"
Jan 23 23:59:05.886572 kubelet[3203]: E0123 23:59:05.886545 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fdf9dbdcc-dnl8w_calico-system(4faca075-ea7c-45a7-8e70-a805a7593117)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fdf9dbdcc-dnl8w_calico-system(4faca075-ea7c-45a7-8e70-a805a7593117)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fdf9dbdcc-dnl8w" podUID="4faca075-ea7c-45a7-8e70-a805a7593117"
Jan 23 23:59:05.892091 containerd[1736]: time="2026-01-23T23:59:05.892052859Z" level=error msg="Failed to destroy network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.892384 containerd[1736]: time="2026-01-23T23:59:05.892351899Z" level=error msg="encountered an error cleaning up failed sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.892434 containerd[1736]: time="2026-01-23T23:59:05.892408699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-27fdn,Uid:693475f7-1f52-409e-89ad-83367b27d7ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.892608 kubelet[3203]: E0123 23:59:05.892577 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.892827 kubelet[3203]: E0123 23:59:05.892622 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-27fdn"
Jan 23 23:59:05.892827 kubelet[3203]: E0123 23:59:05.892643 3203 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-27fdn"
Jan 23 23:59:05.892827 kubelet[3203]: E0123 23:59:05.892694 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-27fdn_calico-system(693475f7-1f52-409e-89ad-83367b27d7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-27fdn_calico-system(693475f7-1f52-409e-89ad-83367b27d7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef"
Jan 23 23:59:05.896860 containerd[1736]: time="2026-01-23T23:59:05.896767740Z" level=error msg="Failed to destroy network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.897906 containerd[1736]: time="2026-01-23T23:59:05.897617340Z" level=error msg="Failed to destroy network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.897994 containerd[1736]: time="2026-01-23T23:59:05.897897740Z" level=error msg="encountered an error cleaning up failed sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.897994 containerd[1736]: time="2026-01-23T23:59:05.897967700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w8mnm,Uid:30575e89-2706-4309-ac97-5d65652326e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.898202 kubelet[3203]: E0123 23:59:05.898139 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.898202 kubelet[3203]: E0123 23:59:05.898177 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w8mnm"
Jan 23 23:59:05.898202 kubelet[3203]: E0123 23:59:05.898191 3203 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w8mnm"
Jan 23 23:59:05.898577 kubelet[3203]: E0123 23:59:05.898222 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w8mnm_kube-system(30575e89-2706-4309-ac97-5d65652326e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w8mnm_kube-system(30575e89-2706-4309-ac97-5d65652326e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w8mnm" podUID="30575e89-2706-4309-ac97-5d65652326e6"
Jan 23 23:59:05.899201 containerd[1736]: time="2026-01-23T23:59:05.899072980Z" level=error msg="encountered an error cleaning up failed sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.899201 containerd[1736]: time="2026-01-23T23:59:05.899120100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6977ffbc55-s4jdp,Uid:a31be8f9-573e-4955-99b0-981cca2e99b2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.899549 kubelet[3203]: E0123 23:59:05.899428 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.899874 kubelet[3203]: E0123 23:59:05.899559 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp"
Jan 23 23:59:05.899874 kubelet[3203]: E0123 23:59:05.899578 3203 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp"
Jan 23 23:59:05.899874 kubelet[3203]: E0123 23:59:05.899613 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6977ffbc55-s4jdp_calico-system(a31be8f9-573e-4955-99b0-981cca2e99b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6977ffbc55-s4jdp_calico-system(a31be8f9-573e-4955-99b0-981cca2e99b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2"
Jan 23 23:59:05.901523 containerd[1736]: time="2026-01-23T23:59:05.901490701Z" level=error msg="Failed to destroy network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.901784 containerd[1736]: time="2026-01-23T23:59:05.901756661Z" level=error msg="encountered an error cleaning up failed sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.901822 containerd[1736]: time="2026-01-23T23:59:05.901797501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snw5g,Uid:5d2e99f6-dee0-4678-aa07-fbf33b420e68,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.902095 kubelet[3203]: E0123 23:59:05.901968 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:05.902095 kubelet[3203]: E0123 23:59:05.902003 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-snw5g"
Jan 23 23:59:05.902095 kubelet[3203]: E0123 23:59:05.902020 3203 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-snw5g"
Jan 23 23:59:05.902213 kubelet[3203]: E0123 23:59:05.902065 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-snw5g_kube-system(5d2e99f6-dee0-4678-aa07-fbf33b420e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-snw5g_kube-system(5d2e99f6-dee0-4678-aa07-fbf33b420e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-snw5g" podUID="5d2e99f6-dee0-4678-aa07-fbf33b420e68"
Jan 23 23:59:06.122737 containerd[1736]: time="2026-01-23T23:59:06.122656062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88658b6c-q6dt5,Uid:251b4c3c-e8df-4086-8bfb-8297ee672eec,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 23:59:06.193197 containerd[1736]: time="2026-01-23T23:59:06.192662075Z" level=error msg="Failed to destroy network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.193197 containerd[1736]: time="2026-01-23T23:59:06.193000795Z" level=error msg="encountered an error cleaning up failed sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.193197 containerd[1736]: time="2026-01-23T23:59:06.193053995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88658b6c-q6dt5,Uid:251b4c3c-e8df-4086-8bfb-8297ee672eec,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.194213 kubelet[3203]: E0123 23:59:06.194171 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.194289 kubelet[3203]: E0123 23:59:06.194230 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5"
Jan 23 23:59:06.194289 kubelet[3203]: E0123 23:59:06.194255 3203 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5"
Jan 23 23:59:06.194346 kubelet[3203]: E0123 23:59:06.194294 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f88658b6c-q6dt5_calico-apiserver(251b4c3c-e8df-4086-8bfb-8297ee672eec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f88658b6c-q6dt5_calico-apiserver(251b4c3c-e8df-4086-8bfb-8297ee672eec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec"
Jan 23 23:59:06.274764 systemd[1]: Created slice kubepods-besteffort-pod89876e47_5c25_4ed8_975b_aadadd46d2c9.slice - libcontainer container kubepods-besteffort-pod89876e47_5c25_4ed8_975b_aadadd46d2c9.slice.
Jan 23 23:59:06.276844 containerd[1736]: time="2026-01-23T23:59:06.276809250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phrmd,Uid:89876e47-5c25-4ed8-975b-aadadd46d2c9,Namespace:calico-system,Attempt:0,}"
Jan 23 23:59:06.335657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0-shm.mount: Deactivated successfully.
Jan 23 23:59:06.367180 containerd[1736]: time="2026-01-23T23:59:06.367131267Z" level=error msg="Failed to destroy network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.369520 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a-shm.mount: Deactivated successfully.
Jan 23 23:59:06.370989 containerd[1736]: time="2026-01-23T23:59:06.370915908Z" level=error msg="encountered an error cleaning up failed sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.371350 containerd[1736]: time="2026-01-23T23:59:06.371276988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phrmd,Uid:89876e47-5c25-4ed8-975b-aadadd46d2c9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.371614 kubelet[3203]: E0123 23:59:06.371489 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.371614 kubelet[3203]: E0123 23:59:06.371526 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-phrmd"
Jan 23 23:59:06.371614 kubelet[3203]: E0123 23:59:06.371551 3203 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-phrmd"
Jan 23 23:59:06.371772 kubelet[3203]: E0123 23:59:06.371593 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9"
Jan 23 23:59:06.375927 kubelet[3203]: I0123 23:59:06.375891 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b"
Jan 23 23:59:06.377018 containerd[1736]: time="2026-01-23T23:59:06.376338549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 23 23:59:06.377018 containerd[1736]: time="2026-01-23T23:59:06.376408709Z" level=info msg="StopPodSandbox for \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\""
Jan 23 23:59:06.377018 containerd[1736]: time="2026-01-23T23:59:06.376552109Z" level=info msg="Ensure that sandbox ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b in task-service has been cleanup successfully"
Jan 23 23:59:06.379042 kubelet[3203]: I0123 23:59:06.378982 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0"
Jan 23 23:59:06.380646 containerd[1736]: time="2026-01-23T23:59:06.379986549Z" level=info msg="StopPodSandbox for \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\""
Jan 23 23:59:06.380646 containerd[1736]: time="2026-01-23T23:59:06.380136469Z" level=info msg="Ensure that sandbox 31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0 in task-service has been cleanup successfully"
Jan 23 23:59:06.392786 kubelet[3203]: I0123 23:59:06.390676 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97"
Jan 23 23:59:06.393786 containerd[1736]: time="2026-01-23T23:59:06.393600992Z" level=info msg="StopPodSandbox for \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\""
Jan 23 23:59:06.394785 containerd[1736]: time="2026-01-23T23:59:06.394621632Z" level=info msg="Ensure that sandbox bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97 in task-service has been cleanup successfully"
Jan 23 23:59:06.401632 kubelet[3203]: I0123 23:59:06.401611 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766"
Jan 23 23:59:06.403970 containerd[1736]: time="2026-01-23T23:59:06.403224714Z" level=info msg="StopPodSandbox for \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\""
Jan 23 23:59:06.403970 containerd[1736]: time="2026-01-23T23:59:06.403371714Z" level=info msg="Ensure that sandbox abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766 in task-service has been cleanup successfully"
Jan 23 23:59:06.412622 kubelet[3203]: I0123 23:59:06.412593 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19"
Jan 23 23:59:06.417900 containerd[1736]: time="2026-01-23T23:59:06.417826716Z" level=info msg="StopPodSandbox for \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\""
Jan 23 23:59:06.418964 containerd[1736]: time="2026-01-23T23:59:06.418609276Z" level=info msg="Ensure that sandbox b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19 in task-service has been cleanup successfully"
Jan 23 23:59:06.426224 containerd[1736]: time="2026-01-23T23:59:06.426196398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674d7cd84f-5hq44,Uid:e0b5e5a7-1acb-4d63-8673-57e3c939b318,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 23:59:06.426932 containerd[1736]: time="2026-01-23T23:59:06.426896278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88658b6c-p27j5,Uid:849bc66d-ccf9-400e-bccb-fea5f90abeb0,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 23:59:06.430480 kubelet[3203]: I0123 23:59:06.429869 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878"
Jan 23 23:59:06.437227 containerd[1736]: time="2026-01-23T23:59:06.437198680Z" level=info msg="StopPodSandbox for \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\""
Jan 23 23:59:06.441217 containerd[1736]: time="2026-01-23T23:59:06.441184521Z" level=info msg="Ensure that sandbox 728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878 in task-service has been cleanup successfully"
Jan 23 23:59:06.454308 containerd[1736]: time="2026-01-23T23:59:06.454184403Z" level=error msg="StopPodSandbox for \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\" failed" error="failed to destroy network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.454648 kubelet[3203]: E0123 23:59:06.454560 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0"
Jan 23 23:59:06.454800 kubelet[3203]: E0123 23:59:06.454638 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0"}
Jan 23 23:59:06.454800 kubelet[3203]: E0123 23:59:06.454777 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"693475f7-1f52-409e-89ad-83367b27d7ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:59:06.455236 kubelet[3203]: E0123 23:59:06.455044 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"693475f7-1f52-409e-89ad-83367b27d7ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef"
Jan 23 23:59:06.463818 containerd[1736]: time="2026-01-23T23:59:06.463744525Z" level=error msg="StopPodSandbox for \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\" failed" error="failed to destroy network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.465085 kubelet[3203]: E0123 23:59:06.463955 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b"
Jan 23 23:59:06.465085 kubelet[3203]: E0123 23:59:06.463999 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b"}
Jan 23 23:59:06.465085 kubelet[3203]: E0123 23:59:06.464031 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"251b4c3c-e8df-4086-8bfb-8297ee672eec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:59:06.465085 kubelet[3203]: E0123 23:59:06.464052 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"251b4c3c-e8df-4086-8bfb-8297ee672eec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec"
Jan 23 23:59:06.493444 containerd[1736]: time="2026-01-23T23:59:06.493402370Z" level=error msg="StopPodSandbox for \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\" failed" error="failed to destroy network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.493665 containerd[1736]: time="2026-01-23T23:59:06.493551570Z" level=error msg="StopPodSandbox for \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\" failed" error="failed to destroy network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.494047 kubelet[3203]: E0123 23:59:06.493781 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97"
Jan 23 23:59:06.494047 kubelet[3203]: E0123 23:59:06.493825 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97"}
Jan 23 23:59:06.494047 kubelet[3203]: E0123 23:59:06.493859 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4faca075-ea7c-45a7-8e70-a805a7593117\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:59:06.494047 kubelet[3203]: E0123 23:59:06.493879 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4faca075-ea7c-45a7-8e70-a805a7593117\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fdf9dbdcc-dnl8w" podUID="4faca075-ea7c-45a7-8e70-a805a7593117"
Jan 23 23:59:06.494483 kubelet[3203]: E0123 23:59:06.494362 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766"
Jan 23 23:59:06.494483 kubelet[3203]: E0123 23:59:06.494416 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766"}
Jan 23 23:59:06.494483 kubelet[3203]: E0123 23:59:06.494440 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30575e89-2706-4309-ac97-5d65652326e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:59:06.494483 kubelet[3203]: E0123 23:59:06.494457 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30575e89-2706-4309-ac97-5d65652326e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w8mnm" podUID="30575e89-2706-4309-ac97-5d65652326e6"
Jan 23 23:59:06.512158 containerd[1736]: time="2026-01-23T23:59:06.512118093Z" level=error msg="StopPodSandbox for \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\" failed" error="failed to destroy network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.512664 kubelet[3203]: E0123 23:59:06.512452 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878"
Jan 23 23:59:06.512664 kubelet[3203]: E0123 23:59:06.512508 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878"}
Jan 23 23:59:06.512664 kubelet[3203]: E0123 23:59:06.512538 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d2e99f6-dee0-4678-aa07-fbf33b420e68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:59:06.512664 kubelet[3203]: E0123 23:59:06.512558 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d2e99f6-dee0-4678-aa07-fbf33b420e68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-snw5g" podUID="5d2e99f6-dee0-4678-aa07-fbf33b420e68"
Jan 23 23:59:06.522536 containerd[1736]: time="2026-01-23T23:59:06.522488055Z" level=error msg="StopPodSandbox for \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\" failed" error="failed to destroy network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.524086 kubelet[3203]: E0123 23:59:06.524013 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19"
Jan 23 23:59:06.524546 kubelet[3203]: E0123 23:59:06.524445 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19"}
Jan 23 23:59:06.524546 kubelet[3203]: E0123 23:59:06.524498 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a31be8f9-573e-4955-99b0-981cca2e99b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:59:06.524546 kubelet[3203]: E0123 23:59:06.524521 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a31be8f9-573e-4955-99b0-981cca2e99b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2"
Jan 23 23:59:06.576236 containerd[1736]: time="2026-01-23T23:59:06.576183225Z" level=error msg="Failed to destroy network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.576840 containerd[1736]: time="2026-01-23T23:59:06.576773705Z" level=error msg="encountered an error cleaning up failed sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.576897 containerd[1736]: time="2026-01-23T23:59:06.576827905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674d7cd84f-5hq44,Uid:e0b5e5a7-1acb-4d63-8673-57e3c939b318,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.577530 kubelet[3203]: E0123 23:59:06.577070 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:59:06.577530 kubelet[3203]: E0123 23:59:06.577137 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44"
Jan 23 23:59:06.577530 kubelet[3203]: E0123 23:59:06.577156 3203
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" Jan 23 23:59:06.577830 kubelet[3203]: E0123 23:59:06.577193 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-674d7cd84f-5hq44_calico-apiserver(e0b5e5a7-1acb-4d63-8673-57e3c939b318)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-674d7cd84f-5hq44_calico-apiserver(e0b5e5a7-1acb-4d63-8673-57e3c939b318)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 23 23:59:06.590057 containerd[1736]: time="2026-01-23T23:59:06.589999547Z" level=error msg="Failed to destroy network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:59:06.590341 containerd[1736]: time="2026-01-23T23:59:06.590314467Z" level=error msg="encountered an error cleaning up failed sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:59:06.590394 containerd[1736]: time="2026-01-23T23:59:06.590365987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88658b6c-p27j5,Uid:849bc66d-ccf9-400e-bccb-fea5f90abeb0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:59:06.590585 kubelet[3203]: E0123 23:59:06.590552 3203 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:59:06.590631 kubelet[3203]: E0123 23:59:06.590604 3203 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" Jan 23 23:59:06.590631 kubelet[3203]: E0123 23:59:06.590623 3203 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" Jan 23 23:59:06.590688 kubelet[3203]: E0123 23:59:06.590664 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f88658b6c-p27j5_calico-apiserver(849bc66d-ccf9-400e-bccb-fea5f90abeb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f88658b6c-p27j5_calico-apiserver(849bc66d-ccf9-400e-bccb-fea5f90abeb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 23 23:59:07.323895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4-shm.mount: Deactivated successfully. Jan 23 23:59:07.324044 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b-shm.mount: Deactivated successfully. 
Jan 23 23:59:07.437798 kubelet[3203]: I0123 23:59:07.437171 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:07.438426 containerd[1736]: time="2026-01-23T23:59:07.438168058Z" level=info msg="StopPodSandbox for \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\"" Jan 23 23:59:07.439067 containerd[1736]: time="2026-01-23T23:59:07.438406018Z" level=info msg="Ensure that sandbox df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a in task-service has been cleanup successfully" Jan 23 23:59:07.443399 kubelet[3203]: I0123 23:59:07.441227 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:07.443523 containerd[1736]: time="2026-01-23T23:59:07.442304339Z" level=info msg="StopPodSandbox for \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\"" Jan 23 23:59:07.443523 containerd[1736]: time="2026-01-23T23:59:07.442517939Z" level=info msg="Ensure that sandbox f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4 in task-service has been cleanup successfully" Jan 23 23:59:07.445559 kubelet[3203]: I0123 23:59:07.445503 3203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:07.447093 containerd[1736]: time="2026-01-23T23:59:07.447065299Z" level=info msg="StopPodSandbox for \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\"" Jan 23 23:59:07.447697 containerd[1736]: time="2026-01-23T23:59:07.447663339Z" level=info msg="Ensure that sandbox 8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b in task-service has been cleanup successfully" Jan 23 23:59:07.488115 containerd[1736]: time="2026-01-23T23:59:07.488065827Z" level=error msg="StopPodSandbox for \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\" failed" error="failed to destroy network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:59:07.488465 kubelet[3203]: E0123 23:59:07.488432 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:07.488597 kubelet[3203]: E0123 23:59:07.488577 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a"} Jan 23 23:59:07.488724 kubelet[3203]: E0123 23:59:07.488701 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89876e47-5c25-4ed8-975b-aadadd46d2c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:59:07.488924 kubelet[3203]: E0123 23:59:07.488904 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89876e47-5c25-4ed8-975b-aadadd46d2c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:59:07.499216 containerd[1736]: time="2026-01-23T23:59:07.499055869Z" level=error msg="StopPodSandbox for \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\" failed" error="failed to destroy network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:59:07.499573 kubelet[3203]: E0123 23:59:07.499532 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:07.499721 kubelet[3203]: E0123 23:59:07.499586 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4"} Jan 23 23:59:07.499721 kubelet[3203]: E0123 23:59:07.499617 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"849bc66d-ccf9-400e-bccb-fea5f90abeb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:59:07.499721 kubelet[3203]: E0123 23:59:07.499641 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"849bc66d-ccf9-400e-bccb-fea5f90abeb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 23 23:59:07.500569 containerd[1736]: time="2026-01-23T23:59:07.500278909Z" level=error msg="StopPodSandbox for \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\" failed" error="failed to destroy network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:59:07.502113 kubelet[3203]: E0123 23:59:07.500885 3203 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:07.502113 kubelet[3203]: E0123 23:59:07.500918 3203 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b"} Jan 23 23:59:07.502113 kubelet[3203]: E0123 23:59:07.501974 3203 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0b5e5a7-1acb-4d63-8673-57e3c939b318\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:59:07.502113 kubelet[3203]: E0123 23:59:07.502006 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0b5e5a7-1acb-4d63-8673-57e3c939b318\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 23 23:59:10.570157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641032397.mount: Deactivated successfully. 
Jan 23 23:59:11.175511 containerd[1736]: time="2026-01-23T23:59:11.174842442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:11.179457 containerd[1736]: time="2026-01-23T23:59:11.179424682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 23:59:11.183794 containerd[1736]: time="2026-01-23T23:59:11.183719843Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:11.188802 containerd[1736]: time="2026-01-23T23:59:11.188747844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:59:11.189524 containerd[1736]: time="2026-01-23T23:59:11.189494924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.813120735s" Jan 23 23:59:11.189574 containerd[1736]: time="2026-01-23T23:59:11.189530844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:59:11.203835 containerd[1736]: time="2026-01-23T23:59:11.203724327Z" level=info msg="CreateContainer within sandbox \"38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:59:11.255920 containerd[1736]: time="2026-01-23T23:59:11.255875376Z" level=info msg="CreateContainer within sandbox \"38a587c3a63a368f6d57ee151c5371086b3bd3d60ea7221eadbf7bcc276f3e9d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1ecda2ef57428b6bc97ef0d1910419e24a8b97a663aa96c68c834597197dd0e8\"" Jan 23 23:59:11.256526 containerd[1736]: time="2026-01-23T23:59:11.256498696Z" level=info msg="StartContainer for \"1ecda2ef57428b6bc97ef0d1910419e24a8b97a663aa96c68c834597197dd0e8\"" Jan 23 23:59:11.283109 systemd[1]: Started cri-containerd-1ecda2ef57428b6bc97ef0d1910419e24a8b97a663aa96c68c834597197dd0e8.scope - libcontainer container 1ecda2ef57428b6bc97ef0d1910419e24a8b97a663aa96c68c834597197dd0e8. Jan 23 23:59:11.318932 containerd[1736]: time="2026-01-23T23:59:11.318563467Z" level=info msg="StartContainer for \"1ecda2ef57428b6bc97ef0d1910419e24a8b97a663aa96c68c834597197dd0e8\" returns successfully" Jan 23 23:59:11.600631 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 23:59:11.600748 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 23 23:59:11.741909 kubelet[3203]: I0123 23:59:11.741838 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lxp5v" podStartSLOduration=1.662434174 podStartE2EDuration="15.741210662s" podCreationTimestamp="2026-01-23 23:58:56 +0000 UTC" firstStartedPulling="2026-01-23 23:58:57.111571836 +0000 UTC m=+24.960638878" lastFinishedPulling="2026-01-23 23:59:11.190348284 +0000 UTC m=+39.039415366" observedRunningTime="2026-01-23 23:59:11.497629059 +0000 UTC m=+39.346696141" watchObservedRunningTime="2026-01-23 23:59:11.741210662 +0000 UTC m=+39.590277744" Jan 23 23:59:11.743844 containerd[1736]: time="2026-01-23T23:59:11.743572783Z" level=info msg="StopPodSandbox for \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\"" Jan 23 23:59:11.786655 kubelet[3203]: I0123 23:59:11.786583 3203 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.848 [INFO][4412] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.849 [INFO][4412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" iface="eth0" netns="/var/run/netns/cni-453b7871-eea7-08df-5197-a9ce9d7cef9e" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.850 [INFO][4412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" iface="eth0" netns="/var/run/netns/cni-453b7871-eea7-08df-5197-a9ce9d7cef9e" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.850 [INFO][4412] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" iface="eth0" netns="/var/run/netns/cni-453b7871-eea7-08df-5197-a9ce9d7cef9e" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.850 [INFO][4412] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.850 [INFO][4412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.895 [INFO][4423] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.895 [INFO][4423] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.895 [INFO][4423] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.908 [WARNING][4423] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.908 [INFO][4423] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.911 [INFO][4423] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:11.915966 containerd[1736]: 2026-01-23 23:59:11.914 [INFO][4412] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:11.917044 containerd[1736]: time="2026-01-23T23:59:11.916405093Z" level=info msg="TearDown network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\" successfully" Jan 23 23:59:11.917044 containerd[1736]: time="2026-01-23T23:59:11.916432613Z" level=info msg="StopPodSandbox for \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\" returns successfully" Jan 23 23:59:11.919161 systemd[1]: run-netns-cni\x2d453b7871\x2deea7\x2d08df\x2d5197\x2da9ce9d7cef9e.mount: Deactivated successfully. Jan 23 23:59:11.966693 kubelet[3203]: I0123 23:59:11.966279 3203 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4faca075-ea7c-45a7-8e70-a805a7593117-whisker-ca-bundle\") pod \"4faca075-ea7c-45a7-8e70-a805a7593117\" (UID: \"4faca075-ea7c-45a7-8e70-a805a7593117\") " Jan 23 23:59:11.966693 kubelet[3203]: I0123 23:59:11.966347 3203 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4faca075-ea7c-45a7-8e70-a805a7593117-whisker-backend-key-pair\") pod \"4faca075-ea7c-45a7-8e70-a805a7593117\" (UID: \"4faca075-ea7c-45a7-8e70-a805a7593117\") " Jan 23 23:59:11.966693 kubelet[3203]: I0123 23:59:11.966382 3203 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwwh4\" (UniqueName: \"kubernetes.io/projected/4faca075-ea7c-45a7-8e70-a805a7593117-kube-api-access-lwwh4\") pod \"4faca075-ea7c-45a7-8e70-a805a7593117\" (UID: \"4faca075-ea7c-45a7-8e70-a805a7593117\") " Jan 23 23:59:11.966693 kubelet[3203]: I0123 23:59:11.966646 3203 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4faca075-ea7c-45a7-8e70-a805a7593117-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4faca075-ea7c-45a7-8e70-a805a7593117" (UID: "4faca075-ea7c-45a7-8e70-a805a7593117"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:59:11.971551 systemd[1]: var-lib-kubelet-pods-4faca075\x2dea7c\x2d45a7\x2d8e70\x2da805a7593117-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlwwh4.mount: Deactivated successfully. 
Jan 23 23:59:11.971698 kubelet[3203]: I0123 23:59:11.971532 3203 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4faca075-ea7c-45a7-8e70-a805a7593117-kube-api-access-lwwh4" (OuterVolumeSpecName: "kube-api-access-lwwh4") pod "4faca075-ea7c-45a7-8e70-a805a7593117" (UID: "4faca075-ea7c-45a7-8e70-a805a7593117"). InnerVolumeSpecName "kube-api-access-lwwh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:59:11.976604 kubelet[3203]: I0123 23:59:11.976572 3203 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4faca075-ea7c-45a7-8e70-a805a7593117-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4faca075-ea7c-45a7-8e70-a805a7593117" (UID: "4faca075-ea7c-45a7-8e70-a805a7593117"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:59:11.978634 systemd[1]: var-lib-kubelet-pods-4faca075\x2dea7c\x2d45a7\x2d8e70\x2da805a7593117-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 23:59:12.067690 kubelet[3203]: I0123 23:59:12.067625 3203 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4faca075-ea7c-45a7-8e70-a805a7593117-whisker-ca-bundle\") on node \"ci-4081.3.6-n-95a9bf6543\" DevicePath \"\"" Jan 23 23:59:12.067690 kubelet[3203]: I0123 23:59:12.067659 3203 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4faca075-ea7c-45a7-8e70-a805a7593117-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-95a9bf6543\" DevicePath \"\"" Jan 23 23:59:12.067690 kubelet[3203]: I0123 23:59:12.067671 3203 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lwwh4\" (UniqueName: \"kubernetes.io/projected/4faca075-ea7c-45a7-8e70-a805a7593117-kube-api-access-lwwh4\") on node \"ci-4081.3.6-n-95a9bf6543\" DevicePath \"\"" Jan 23 23:59:12.273177 systemd[1]: Removed slice kubepods-besteffort-pod4faca075_ea7c_45a7_8e70_a805a7593117.slice - libcontainer container kubepods-besteffort-pod4faca075_ea7c_45a7_8e70_a805a7593117.slice. Jan 23 23:59:12.551192 systemd[1]: Created slice kubepods-besteffort-pod8ee41f25_89f1_4519_b99e_33fdb651ce3d.slice - libcontainer container kubepods-besteffort-pod8ee41f25_89f1_4519_b99e_33fdb651ce3d.slice. 
Jan 23 23:59:12.571978 kubelet[3203]: I0123 23:59:12.571187 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsm26\" (UniqueName: \"kubernetes.io/projected/8ee41f25-89f1-4519-b99e-33fdb651ce3d-kube-api-access-dsm26\") pod \"whisker-787b66fb85-crtpt\" (UID: \"8ee41f25-89f1-4519-b99e-33fdb651ce3d\") " pod="calico-system/whisker-787b66fb85-crtpt" Jan 23 23:59:12.571978 kubelet[3203]: I0123 23:59:12.571234 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ee41f25-89f1-4519-b99e-33fdb651ce3d-whisker-ca-bundle\") pod \"whisker-787b66fb85-crtpt\" (UID: \"8ee41f25-89f1-4519-b99e-33fdb651ce3d\") " pod="calico-system/whisker-787b66fb85-crtpt" Jan 23 23:59:12.571978 kubelet[3203]: I0123 23:59:12.571261 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ee41f25-89f1-4519-b99e-33fdb651ce3d-whisker-backend-key-pair\") pod \"whisker-787b66fb85-crtpt\" (UID: \"8ee41f25-89f1-4519-b99e-33fdb651ce3d\") " pod="calico-system/whisker-787b66fb85-crtpt" Jan 23 23:59:12.856286 containerd[1736]: time="2026-01-23T23:59:12.856039020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787b66fb85-crtpt,Uid:8ee41f25-89f1-4519-b99e-33fdb651ce3d,Namespace:calico-system,Attempt:0,}" Jan 23 23:59:13.022827 systemd-networkd[1364]: calib6259033efb: Link UP Jan 23 23:59:13.023055 systemd-networkd[1364]: calib6259033efb: Gained carrier Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.926 [INFO][4445] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.938 [INFO][4445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0 whisker-787b66fb85- calico-system 8ee41f25-89f1-4519-b99e-33fdb651ce3d 938 0 2026-01-23 23:59:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:787b66fb85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 whisker-787b66fb85-crtpt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib6259033efb [] [] }} ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Namespace="calico-system" Pod="whisker-787b66fb85-crtpt" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.938 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Namespace="calico-system" Pod="whisker-787b66fb85-crtpt" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.962 [INFO][4458] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" HandleID="k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.962 [INFO][4458] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" HandleID="k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"whisker-787b66fb85-crtpt", "timestamp":"2026-01-23 23:59:12.962120159 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.962 [INFO][4458] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.962 [INFO][4458] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.962 [INFO][4458] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543' Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.970 [INFO][4458] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.975 [INFO][4458] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.978 [INFO][4458] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.979 [INFO][4458] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.981 [INFO][4458] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.981 [INFO][4458] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.982 [INFO][4458] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3 Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.989 [INFO][4458] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.994 [INFO][4458] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.129/26] block=192.168.91.128/26 handle="k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.994 [INFO][4458] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.129/26] handle="k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.994 
[INFO][4458] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:13.042678 containerd[1736]: 2026-01-23 23:59:12.994 [INFO][4458] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.129/26] IPv6=[] ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" HandleID="k8s-pod-network.a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" Jan 23 23:59:13.043301 containerd[1736]: 2026-01-23 23:59:12.996 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Namespace="calico-system" Pod="whisker-787b66fb85-crtpt" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0", GenerateName:"whisker-787b66fb85-", Namespace:"calico-system", SelfLink:"", UID:"8ee41f25-89f1-4519-b99e-33fdb651ce3d", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787b66fb85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"whisker-787b66fb85-crtpt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib6259033efb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:13.043301 containerd[1736]: 2026-01-23 23:59:12.996 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.129/32] ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Namespace="calico-system" Pod="whisker-787b66fb85-crtpt" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" Jan 23 23:59:13.043301 containerd[1736]: 2026-01-23 23:59:12.996 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6259033efb ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Namespace="calico-system" Pod="whisker-787b66fb85-crtpt" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" Jan 23 23:59:13.043301 containerd[1736]: 2026-01-23 23:59:13.022 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Namespace="calico-system" Pod="whisker-787b66fb85-crtpt" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" Jan 23 23:59:13.043301 containerd[1736]: 2026-01-23 23:59:13.022 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Namespace="calico-system" Pod="whisker-787b66fb85-crtpt" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0", GenerateName:"whisker-787b66fb85-", Namespace:"calico-system", SelfLink:"", UID:"8ee41f25-89f1-4519-b99e-33fdb651ce3d", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 59, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787b66fb85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3", Pod:"whisker-787b66fb85-crtpt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib6259033efb", MAC:"02:ef:39:2c:65:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:13.043301 containerd[1736]: 2026-01-23 23:59:13.039 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3" Namespace="calico-system" Pod="whisker-787b66fb85-crtpt" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--787b66fb85--crtpt-eth0" Jan 23 23:59:13.078002 containerd[1736]: time="2026-01-23T23:59:13.077889380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:13.078002 containerd[1736]: time="2026-01-23T23:59:13.077952220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:13.078002 containerd[1736]: time="2026-01-23T23:59:13.077977500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:13.079293 containerd[1736]: time="2026-01-23T23:59:13.078923340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:13.108516 systemd[1]: Started cri-containerd-a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3.scope - libcontainer container a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3. 
Jan 23 23:59:13.183773 containerd[1736]: time="2026-01-23T23:59:13.183719519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787b66fb85-crtpt,Uid:8ee41f25-89f1-4519-b99e-33fdb651ce3d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6430a1cb8e07be227b8e8287e4ae149aaab6761d5392ce12c8644fdd9e316c3\"" Jan 23 23:59:13.187837 containerd[1736]: time="2026-01-23T23:59:13.187718879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:59:13.482976 kernel: bpftool[4629]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 23 23:59:13.578301 containerd[1736]: time="2026-01-23T23:59:13.578120149Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:13.580885 containerd[1736]: time="2026-01-23T23:59:13.580730549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:59:13.580885 containerd[1736]: time="2026-01-23T23:59:13.580854709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:59:13.581338 kubelet[3203]: E0123 23:59:13.581122 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:59:13.581338 kubelet[3203]: E0123 23:59:13.581167 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:59:13.582660 kubelet[3203]: E0123 23:59:13.582595 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7e3e9b460e424236a2b5a2375c5d7b77,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787b66fb85-crtpt_calico-system(8ee41f25-89f1-4519-b99e-33fdb651ce3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:13.584940 containerd[1736]: time="2026-01-23T23:59:13.584870230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:59:13.702154 systemd-networkd[1364]: vxlan.calico: Link UP Jan 23 23:59:13.702163 systemd-networkd[1364]: vxlan.calico: Gained carrier Jan 23 23:59:13.849637 containerd[1736]: time="2026-01-23T23:59:13.849497197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:13.860039 containerd[1736]: time="2026-01-23T23:59:13.859972279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:59:13.860391 containerd[1736]: time="2026-01-23T23:59:13.860008599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:59:13.860430 kubelet[3203]: E0123 23:59:13.860203 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:59:13.860430 kubelet[3203]: E0123 23:59:13.860244 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:59:13.860492 kubelet[3203]: E0123 23:59:13.860345 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787b66fb85-crtpt_calico-system(8ee41f25-89f1-4519-b99e-33fdb651ce3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:13.862063 kubelet[3203]: E0123 23:59:13.862028 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 23 23:59:14.269820 kubelet[3203]: I0123 23:59:14.269643 3203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="4faca075-ea7c-45a7-8e70-a805a7593117" path="/var/lib/kubelet/pods/4faca075-ea7c-45a7-8e70-a805a7593117/volumes" Jan 23 23:59:14.488439 kubelet[3203]: E0123 23:59:14.488356 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 23 23:59:14.709216 systemd-networkd[1364]: calib6259033efb: Gained IPv6LL Jan 23 23:59:14.837056 systemd-networkd[1364]: vxlan.calico: Gained IPv6LL Jan 23 23:59:17.268000 containerd[1736]: time="2026-01-23T23:59:17.267771431Z" level=info msg="StopPodSandbox for \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\"" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.325 [INFO][4717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.326 [INFO][4717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" iface="eth0" netns="/var/run/netns/cni-b3dfd3c9-8b65-61bd-8119-0b03cde480aa" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.327 [INFO][4717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" iface="eth0" netns="/var/run/netns/cni-b3dfd3c9-8b65-61bd-8119-0b03cde480aa" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.327 [INFO][4717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" iface="eth0" netns="/var/run/netns/cni-b3dfd3c9-8b65-61bd-8119-0b03cde480aa" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.327 [INFO][4717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.327 [INFO][4717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.345 [INFO][4725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.345 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.345 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.353 [WARNING][4725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.353 [INFO][4725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.355 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:17.358611 containerd[1736]: 2026-01-23 23:59:17.356 [INFO][4717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:17.361013 containerd[1736]: time="2026-01-23T23:59:17.359127650Z" level=info msg="TearDown network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\" successfully" Jan 23 23:59:17.361013 containerd[1736]: time="2026-01-23T23:59:17.359166930Z" level=info msg="StopPodSandbox for \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\" returns successfully" Jan 23 23:59:17.361830 containerd[1736]: time="2026-01-23T23:59:17.361502090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88658b6c-q6dt5,Uid:251b4c3c-e8df-4086-8bfb-8297ee672eec,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:59:17.362365 systemd[1]: run-netns-cni\x2db3dfd3c9\x2d8b65\x2d61bd\x2d8119\x2d0b03cde480aa.mount: Deactivated successfully. 
Jan 23 23:59:17.499800 systemd-networkd[1364]: calibed1104d4c8: Link UP Jan 23 23:59:17.500635 systemd-networkd[1364]: calibed1104d4c8: Gained carrier Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.435 [INFO][4732] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0 calico-apiserver-5f88658b6c- calico-apiserver 251b4c3c-e8df-4086-8bfb-8297ee672eec 967 0 2026-01-23 23:58:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f88658b6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 calico-apiserver-5f88658b6c-q6dt5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibed1104d4c8 [] [] }} ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-q6dt5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.435 [INFO][4732] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-q6dt5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.458 [INFO][4743] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" HandleID="k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.459 [INFO][4743] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" HandleID="k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"calico-apiserver-5f88658b6c-q6dt5", "timestamp":"2026-01-23 23:59:17.45877587 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.459 [INFO][4743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.459 [INFO][4743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.459 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543' Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.468 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.472 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.475 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.477 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.479 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.479 [INFO][4743] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.480 [INFO][4743] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.488 [INFO][4743] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.493 [INFO][4743] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.130/26] block=192.168.91.128/26 handle="k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.494 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.130/26] handle="k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.494 [INFO][4743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:59:17.518771 containerd[1736]: 2026-01-23 23:59:17.494 [INFO][4743] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.130/26] IPv6=[] ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" HandleID="k8s-pod-network.9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.519313 containerd[1736]: 2026-01-23 23:59:17.496 [INFO][4732] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-q6dt5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0", GenerateName:"calico-apiserver-5f88658b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"251b4c3c-e8df-4086-8bfb-8297ee672eec", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88658b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"calico-apiserver-5f88658b6c-q6dt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibed1104d4c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:17.519313 containerd[1736]: 2026-01-23 23:59:17.496 [INFO][4732] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.130/32] ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-q6dt5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.519313 containerd[1736]: 2026-01-23 23:59:17.496 [INFO][4732] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibed1104d4c8 ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-q6dt5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.519313 containerd[1736]: 2026-01-23 23:59:17.501 [INFO][4732] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-q6dt5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.519313 containerd[1736]: 2026-01-23 23:59:17.501 
[INFO][4732] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-q6dt5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0", GenerateName:"calico-apiserver-5f88658b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"251b4c3c-e8df-4086-8bfb-8297ee672eec", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88658b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af", Pod:"calico-apiserver-5f88658b6c-q6dt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibed1104d4c8", MAC:"aa:c3:65:4d:00:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:17.519313 containerd[1736]: 2026-01-23 23:59:17.516 [INFO][4732] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-q6dt5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:17.540903 containerd[1736]: time="2026-01-23T23:59:17.540722167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:17.540903 containerd[1736]: time="2026-01-23T23:59:17.540775647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:17.540903 containerd[1736]: time="2026-01-23T23:59:17.540786167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:17.541081 containerd[1736]: time="2026-01-23T23:59:17.540864927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:17.562181 systemd[1]: Started cri-containerd-9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af.scope - libcontainer container 9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af. 
Jan 23 23:59:17.590926 containerd[1736]: time="2026-01-23T23:59:17.590887937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88658b6c-q6dt5,Uid:251b4c3c-e8df-4086-8bfb-8297ee672eec,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af\"" Jan 23 23:59:17.593276 containerd[1736]: time="2026-01-23T23:59:17.593246537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:59:17.848468 containerd[1736]: time="2026-01-23T23:59:17.848282829Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:17.851381 containerd[1736]: time="2026-01-23T23:59:17.851277149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:59:17.851381 containerd[1736]: time="2026-01-23T23:59:17.851335909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:59:17.851547 kubelet[3203]: E0123 23:59:17.851475 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:17.851547 kubelet[3203]: E0123 23:59:17.851518 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:17.852007 kubelet[3203]: E0123 23:59:17.851923 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js7r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88658b6c-q6dt5_calico-apiserver(251b4c3c-e8df-4086-8bfb-8297ee672eec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:17.853158 kubelet[3203]: E0123 23:59:17.853107 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec" Jan 23 23:59:18.268583 containerd[1736]: time="2026-01-23T23:59:18.268474833Z" level=info msg="StopPodSandbox for \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\"" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.313 [INFO][4808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.313 [INFO][4808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" iface="eth0" netns="/var/run/netns/cni-3fd15d50-6f4a-f71c-6e32-658e2b24c0f9" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.314 [INFO][4808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" iface="eth0" netns="/var/run/netns/cni-3fd15d50-6f4a-f71c-6e32-658e2b24c0f9" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.314 [INFO][4808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" iface="eth0" netns="/var/run/netns/cni-3fd15d50-6f4a-f71c-6e32-658e2b24c0f9" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.315 [INFO][4808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.315 [INFO][4808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.335 [INFO][4815] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.335 [INFO][4815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.335 [INFO][4815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.344 [WARNING][4815] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.344 [INFO][4815] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.345 [INFO][4815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:18.348509 containerd[1736]: 2026-01-23 23:59:18.347 [INFO][4808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:18.349137 containerd[1736]: time="2026-01-23T23:59:18.349107490Z" level=info msg="TearDown network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\" successfully" Jan 23 23:59:18.349280 containerd[1736]: time="2026-01-23T23:59:18.349189290Z" level=info msg="StopPodSandbox for \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\" returns successfully" Jan 23 23:59:18.349809 containerd[1736]: time="2026-01-23T23:59:18.349782970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phrmd,Uid:89876e47-5c25-4ed8-975b-aadadd46d2c9,Namespace:calico-system,Attempt:1,}" Jan 23 23:59:18.361308 systemd[1]: run-netns-cni\x2d3fd15d50\x2d6f4a\x2df71c\x2d6e32\x2d658e2b24c0f9.mount: Deactivated successfully. 
Jan 23 23:59:18.499152 kubelet[3203]: E0123 23:59:18.499109 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec" Jan 23 23:59:18.590051 systemd-networkd[1364]: calic24195076a8: Link UP Jan 23 23:59:18.591785 systemd-networkd[1364]: calic24195076a8: Gained carrier Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.494 [INFO][4821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0 csi-node-driver- calico-system 89876e47-5c25-4ed8-975b-aadadd46d2c9 977 0 2026-01-23 23:58:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 csi-node-driver-phrmd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic24195076a8 [] [] }} ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Namespace="calico-system" Pod="csi-node-driver-phrmd" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.495 [INFO][4821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Namespace="calico-system" Pod="csi-node-driver-phrmd" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.544 [INFO][4834] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" HandleID="k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.544 [INFO][4834] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" HandleID="k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"csi-node-driver-phrmd", "timestamp":"2026-01-23 23:59:18.544548209 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.544 [INFO][4834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.544 [INFO][4834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.544 [INFO][4834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543' Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.553 [INFO][4834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.557 [INFO][4834] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.561 [INFO][4834] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.563 [INFO][4834] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.565 [INFO][4834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.565 [INFO][4834] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.567 [INFO][4834] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81 Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.573 [INFO][4834] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.583 [INFO][4834] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.131/26] block=192.168.91.128/26 handle="k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.583 [INFO][4834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.131/26] handle="k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.583 [INFO][4834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:59:18.608532 containerd[1736]: 2026-01-23 23:59:18.583 [INFO][4834] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.131/26] IPv6=[] ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" HandleID="k8s-pod-network.b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.609719 containerd[1736]: 2026-01-23 23:59:18.586 [INFO][4821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Namespace="calico-system" Pod="csi-node-driver-phrmd" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89876e47-5c25-4ed8-975b-aadadd46d2c9", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"csi-node-driver-phrmd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic24195076a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:18.609719 containerd[1736]: 2026-01-23 23:59:18.586 [INFO][4821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.131/32] ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Namespace="calico-system" Pod="csi-node-driver-phrmd" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.609719 containerd[1736]: 2026-01-23 23:59:18.586 [INFO][4821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic24195076a8 ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Namespace="calico-system" Pod="csi-node-driver-phrmd" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.609719 containerd[1736]: 2026-01-23 23:59:18.592 [INFO][4821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Namespace="calico-system" Pod="csi-node-driver-phrmd" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:18.609719 containerd[1736]: 2026-01-23 23:59:18.592 [INFO][4821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Namespace="calico-system" Pod="csi-node-driver-phrmd" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89876e47-5c25-4ed8-975b-aadadd46d2c9", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81", Pod:"csi-node-driver-phrmd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic24195076a8", MAC:"3e:30:72:5a:21:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:18.609719 containerd[1736]: 2026-01-23 23:59:18.606 [INFO][4821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81" Namespace="calico-system" Pod="csi-node-driver-phrmd" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:19.264353 containerd[1736]: time="2026-01-23T23:59:19.264270394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:19.264353 containerd[1736]: time="2026-01-23T23:59:19.264321474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:19.264610 containerd[1736]: time="2026-01-23T23:59:19.264331994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:19.265704 containerd[1736]: time="2026-01-23T23:59:19.264668354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:19.272891 containerd[1736]: time="2026-01-23T23:59:19.272166596Z" level=info msg="StopPodSandbox for \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\"" Jan 23 23:59:19.274071 containerd[1736]: time="2026-01-23T23:59:19.273022276Z" level=info msg="StopPodSandbox for \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\"" Jan 23 23:59:19.300091 systemd[1]: Started cri-containerd-b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81.scope - libcontainer container b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81. Jan 23 23:59:19.359714 containerd[1736]: time="2026-01-23T23:59:19.359675333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-phrmd,Uid:89876e47-5c25-4ed8-975b-aadadd46d2c9,Namespace:calico-system,Attempt:1,} returns sandbox id \"b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81\"" Jan 23 23:59:19.362348 containerd[1736]: time="2026-01-23T23:59:19.362319534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:59:19.381519 systemd-networkd[1364]: calibed1104d4c8: Gained IPv6LL Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.378 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.378 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" iface="eth0" netns="/var/run/netns/cni-0e887cca-e3ce-6dba-6727-ddff51d2bb43" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.380 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" iface="eth0" netns="/var/run/netns/cni-0e887cca-e3ce-6dba-6727-ddff51d2bb43" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.381 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" iface="eth0" netns="/var/run/netns/cni-0e887cca-e3ce-6dba-6727-ddff51d2bb43" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.381 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.381 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.434 [INFO][4928] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.434 [INFO][4928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.435 [INFO][4928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.449 [WARNING][4928] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.450 [INFO][4928] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.452 [INFO][4928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:19.456062 containerd[1736]: 2026-01-23 23:59:19.454 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:19.456713 containerd[1736]: time="2026-01-23T23:59:19.456192353Z" level=info msg="TearDown network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\" successfully" Jan 23 23:59:19.456713 containerd[1736]: time="2026-01-23T23:59:19.456218073Z" level=info msg="StopPodSandbox for \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\" returns successfully" Jan 23 23:59:19.458810 systemd[1]: run-netns-cni\x2d0e887cca\x2de3ce\x2d6dba\x2d6727\x2dddff51d2bb43.mount: Deactivated successfully. Jan 23 23:59:19.459444 containerd[1736]: time="2026-01-23T23:59:19.459007073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6977ffbc55-s4jdp,Uid:a31be8f9-573e-4955-99b0-981cca2e99b2,Namespace:calico-system,Attempt:1,}" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.402 [INFO][4904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.404 [INFO][4904] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" iface="eth0" netns="/var/run/netns/cni-a925c989-cd0b-b171-2798-4c463a939fcc" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.404 [INFO][4904] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" iface="eth0" netns="/var/run/netns/cni-a925c989-cd0b-b171-2798-4c463a939fcc" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.404 [INFO][4904] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" iface="eth0" netns="/var/run/netns/cni-a925c989-cd0b-b171-2798-4c463a939fcc" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.404 [INFO][4904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.404 [INFO][4904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.452 [INFO][4933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.454 [INFO][4933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.454 [INFO][4933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.473 [WARNING][4933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.477 [INFO][4933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.478 [INFO][4933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:19.484107 containerd[1736]: 2026-01-23 23:59:19.482 [INFO][4904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:19.485008 containerd[1736]: time="2026-01-23T23:59:19.484224999Z" level=info msg="TearDown network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\" successfully" Jan 23 23:59:19.485008 containerd[1736]: time="2026-01-23T23:59:19.484247879Z" level=info msg="StopPodSandbox for \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\" returns successfully" Jan 23 23:59:19.485008 containerd[1736]: time="2026-01-23T23:59:19.484847399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w8mnm,Uid:30575e89-2706-4309-ac97-5d65652326e6,Namespace:kube-system,Attempt:1,}" Jan 23 23:59:19.489238 systemd[1]: run-netns-cni\x2da925c989\x2dcd0b\x2db171\x2d2798\x2d4c463a939fcc.mount: Deactivated successfully. 
Jan 23 23:59:19.501366 kubelet[3203]: E0123 23:59:19.501136 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec" Jan 23 23:59:19.645384 systemd-networkd[1364]: calibaee1e843c4: Link UP Jan 23 23:59:19.648373 systemd-networkd[1364]: calibaee1e843c4: Gained carrier Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.566 [INFO][4946] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0 calico-kube-controllers-6977ffbc55- calico-system a31be8f9-573e-4955-99b0-981cca2e99b2 990 0 2026-01-23 23:58:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6977ffbc55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 calico-kube-controllers-6977ffbc55-s4jdp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibaee1e843c4 [] [] }} ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Namespace="calico-system" Pod="calico-kube-controllers-6977ffbc55-s4jdp" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.566 [INFO][4946] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Namespace="calico-system" Pod="calico-kube-controllers-6977ffbc55-s4jdp" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.599 [INFO][4964] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" HandleID="k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.600 [INFO][4964] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" HandleID="k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"calico-kube-controllers-6977ffbc55-s4jdp", "timestamp":"2026-01-23 23:59:19.599923222 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.600 [INFO][4964] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.600 [INFO][4964] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.600 [INFO][4964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543' Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.612 [INFO][4964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.616 [INFO][4964] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.619 [INFO][4964] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.621 [INFO][4964] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.623 [INFO][4964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.623 [INFO][4964] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.624 [INFO][4964] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51 Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.631 [INFO][4964] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.637 [INFO][4964] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.132/26] block=192.168.91.128/26 handle="k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.637 [INFO][4964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.132/26] handle="k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.637 [INFO][4964] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:59:19.664135 containerd[1736]: 2026-01-23 23:59:19.637 [INFO][4964] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.132/26] IPv6=[] ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" HandleID="k8s-pod-network.6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.664652 containerd[1736]: 2026-01-23 23:59:19.642 [INFO][4946] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Namespace="calico-system" Pod="calico-kube-controllers-6977ffbc55-s4jdp" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0", GenerateName:"calico-kube-controllers-6977ffbc55-", Namespace:"calico-system", SelfLink:"", UID:"a31be8f9-573e-4955-99b0-981cca2e99b2", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6977ffbc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"calico-kube-controllers-6977ffbc55-s4jdp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibaee1e843c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:19.664652 containerd[1736]: 2026-01-23 23:59:19.642 [INFO][4946] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.132/32] ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Namespace="calico-system" Pod="calico-kube-controllers-6977ffbc55-s4jdp" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.664652 containerd[1736]: 2026-01-23 23:59:19.642 [INFO][4946] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibaee1e843c4 ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Namespace="calico-system" Pod="calico-kube-controllers-6977ffbc55-s4jdp" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.664652 containerd[1736]: 2026-01-23 23:59:19.645 [INFO][4946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Namespace="calico-system" Pod="calico-kube-controllers-6977ffbc55-s4jdp" 
WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.664652 containerd[1736]: 2026-01-23 23:59:19.648 [INFO][4946] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Namespace="calico-system" Pod="calico-kube-controllers-6977ffbc55-s4jdp" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0", GenerateName:"calico-kube-controllers-6977ffbc55-", Namespace:"calico-system", SelfLink:"", UID:"a31be8f9-573e-4955-99b0-981cca2e99b2", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6977ffbc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51", Pod:"calico-kube-controllers-6977ffbc55-s4jdp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibaee1e843c4", MAC:"c6:74:d8:73:15:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:19.664652 containerd[1736]: 2026-01-23 23:59:19.661 [INFO][4946] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51" Namespace="calico-system" Pod="calico-kube-controllers-6977ffbc55-s4jdp" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:19.682592 containerd[1736]: time="2026-01-23T23:59:19.682328879Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:19.687041 containerd[1736]: time="2026-01-23T23:59:19.686997759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:59:19.687183 containerd[1736]: time="2026-01-23T23:59:19.687102999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:59:19.687498 kubelet[3203]: E0123 23:59:19.687321 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:59:19.687498 kubelet[3203]: E0123 23:59:19.687370 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:59:19.692011 kubelet[3203]: E0123 23:59:19.691930 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:19.693838 containerd[1736]: time="2026-01-23T23:59:19.693811561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:59:19.694223 containerd[1736]: time="2026-01-23T23:59:19.694034401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:19.694223 containerd[1736]: time="2026-01-23T23:59:19.694094961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:19.694223 containerd[1736]: time="2026-01-23T23:59:19.694106681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:19.694767 containerd[1736]: time="2026-01-23T23:59:19.694195321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:19.715075 systemd[1]: Started cri-containerd-6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51.scope - libcontainer container 6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51. Jan 23 23:59:19.763172 systemd-networkd[1364]: calide2fd386c59: Link UP Jan 23 23:59:19.766689 systemd-networkd[1364]: calide2fd386c59: Gained carrier Jan 23 23:59:19.767195 systemd-networkd[1364]: calic24195076a8: Gained IPv6LL Jan 23 23:59:19.768555 containerd[1736]: time="2026-01-23T23:59:19.768520336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6977ffbc55-s4jdp,Uid:a31be8f9-573e-4955-99b0-981cca2e99b2,Namespace:calico-system,Attempt:1,} returns sandbox id \"6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51\"" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.589 [INFO][4952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0 coredns-668d6bf9bc- kube-system 30575e89-2706-4309-ac97-5d65652326e6 991 0 2026-01-23 23:58:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 coredns-668d6bf9bc-w8mnm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide2fd386c59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8mnm" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.589 [INFO][4952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8mnm" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.616 [INFO][4972] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" HandleID="k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.617 [INFO][4972] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" HandleID="k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"coredns-668d6bf9bc-w8mnm", "timestamp":"2026-01-23 23:59:19.616494625 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.617 [INFO][4972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.637 [INFO][4972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.637 [INFO][4972] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543' Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.712 [INFO][4972] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.719 [INFO][4972] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.724 [INFO][4972] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.726 [INFO][4972] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.729 [INFO][4972] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.729 [INFO][4972] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.732 [INFO][4972] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1 Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.737 [INFO][4972] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.748 [INFO][4972] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.133/26] block=192.168.91.128/26 handle="k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.748 [INFO][4972] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.133/26] handle="k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.749 [INFO][4972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:59:19.785937 containerd[1736]: 2026-01-23 23:59:19.749 [INFO][4972] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.133/26] IPv6=[] ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" HandleID="k8s-pod-network.061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.786622 containerd[1736]: 2026-01-23 23:59:19.754 [INFO][4952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8mnm" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30575e89-2706-4309-ac97-5d65652326e6", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"coredns-668d6bf9bc-w8mnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide2fd386c59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:19.786622 containerd[1736]: 2026-01-23 23:59:19.754 [INFO][4952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.133/32] ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8mnm" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.786622 containerd[1736]: 2026-01-23 23:59:19.755 [INFO][4952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide2fd386c59 ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8mnm" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.786622 containerd[1736]: 2026-01-23 23:59:19.769 [INFO][4952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-w8mnm" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.786622 containerd[1736]: 2026-01-23 23:59:19.769 [INFO][4952] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8mnm" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30575e89-2706-4309-ac97-5d65652326e6", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1", Pod:"coredns-668d6bf9bc-w8mnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide2fd386c59", MAC:"2e:38:e4:ee:77:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:19.786622 containerd[1736]: 2026-01-23 23:59:19.782 [INFO][4952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8mnm" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:19.808982 containerd[1736]: time="2026-01-23T23:59:19.808790104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:59:19.808982 containerd[1736]: time="2026-01-23T23:59:19.808835464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:59:19.808982 containerd[1736]: time="2026-01-23T23:59:19.808845744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:19.808982 containerd[1736]: time="2026-01-23T23:59:19.808917984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:59:19.823149 systemd[1]: Started cri-containerd-061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1.scope - libcontainer container 061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1. Jan 23 23:59:19.855009 containerd[1736]: time="2026-01-23T23:59:19.854905913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w8mnm,Uid:30575e89-2706-4309-ac97-5d65652326e6,Namespace:kube-system,Attempt:1,} returns sandbox id \"061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1\"" Jan 23 23:59:19.863285 containerd[1736]: time="2026-01-23T23:59:19.863243155Z" level=info msg="CreateContainer within sandbox \"061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:59:19.913176 containerd[1736]: time="2026-01-23T23:59:19.913076405Z" level=info msg="CreateContainer within sandbox \"061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90b624a4e4421cf5f92955e87b1498e64a2ca019bd5966661797f1480685cc9a\"" Jan 23 23:59:19.919756 containerd[1736]: time="2026-01-23T23:59:19.918974766Z" level=info msg="StartContainer for \"90b624a4e4421cf5f92955e87b1498e64a2ca019bd5966661797f1480685cc9a\"" Jan 23 23:59:19.944179 systemd[1]: Started cri-containerd-90b624a4e4421cf5f92955e87b1498e64a2ca019bd5966661797f1480685cc9a.scope - libcontainer container 90b624a4e4421cf5f92955e87b1498e64a2ca019bd5966661797f1480685cc9a. Jan 23 23:59:19.962832 containerd[1736]: time="2026-01-23T23:59:19.962683735Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:19.967132 containerd[1736]: time="2026-01-23T23:59:19.967022856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:59:19.967132 containerd[1736]: time="2026-01-23T23:59:19.967110536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:59:19.967240 kubelet[3203]: E0123 23:59:19.967195 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:59:19.967240 kubelet[3203]: E0123 23:59:19.967236 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:59:19.967492 kubelet[3203]: E0123 23:59:19.967441 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:19.968687 kubelet[3203]: E0123 23:59:19.968652 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:59:19.968781 containerd[1736]: time="2026-01-23T23:59:19.968729216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:59:19.980738 containerd[1736]: time="2026-01-23T23:59:19.980572819Z" level=info msg="StartContainer for \"90b624a4e4421cf5f92955e87b1498e64a2ca019bd5966661797f1480685cc9a\" returns successfully" Jan 23 23:59:20.211097 containerd[1736]: 
time="2026-01-23T23:59:20.210739705Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:20.215355 containerd[1736]: time="2026-01-23T23:59:20.215193546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:59:20.215355 containerd[1736]: time="2026-01-23T23:59:20.215246706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:59:20.215520 kubelet[3203]: E0123 23:59:20.215471 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:59:20.215590 kubelet[3203]: E0123 23:59:20.215519 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:59:20.215896 kubelet[3203]: E0123 23:59:20.215693 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmjjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6977ffbc55-s4jdp_calico-system(a31be8f9-573e-4955-99b0-981cca2e99b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:20.216915 kubelet[3203]: E0123 23:59:20.216871 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2" Jan 23 23:59:20.506208 kubelet[3203]: E0123 23:59:20.506161 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2" Jan 23 23:59:20.508710 kubelet[3203]: E0123 23:59:20.508669 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" 
Jan 23 23:59:20.537763 kubelet[3203]: I0123 23:59:20.537704 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w8mnm" podStartSLOduration=42.537690731 podStartE2EDuration="42.537690731s" podCreationTimestamp="2026-01-23 23:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:59:20.536746411 +0000 UTC m=+48.385813493" watchObservedRunningTime="2026-01-23 23:59:20.537690731 +0000 UTC m=+48.386757813" Jan 23 23:59:20.789234 systemd-networkd[1364]: calibaee1e843c4: Gained IPv6LL Jan 23 23:59:21.270251 containerd[1736]: time="2026-01-23T23:59:21.269978199Z" level=info msg="StopPodSandbox for \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\"" Jan 23 23:59:21.270251 containerd[1736]: time="2026-01-23T23:59:21.270023399Z" level=info msg="StopPodSandbox for \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\"" Jan 23 23:59:21.271650 containerd[1736]: time="2026-01-23T23:59:21.271514519Z" level=info msg="StopPodSandbox for \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\"" Jan 23 23:59:21.271734 containerd[1736]: time="2026-01-23T23:59:21.271703799Z" level=info msg="StopPodSandbox for \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\"" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.382 [INFO][5150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.383 [INFO][5150] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" iface="eth0" netns="/var/run/netns/cni-6ef1d604-e7dd-79cb-5672-ea45af80274d" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.383 [INFO][5150] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" iface="eth0" netns="/var/run/netns/cni-6ef1d604-e7dd-79cb-5672-ea45af80274d" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.386 [INFO][5150] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" iface="eth0" netns="/var/run/netns/cni-6ef1d604-e7dd-79cb-5672-ea45af80274d" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.386 [INFO][5150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.386 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.436 [INFO][5188] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.436 [INFO][5188] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.436 [INFO][5188] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.452 [WARNING][5188] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.452 [INFO][5188] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.456 [INFO][5188] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:21.466298 containerd[1736]: 2026-01-23 23:59:21.463 [INFO][5150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:21.471431 containerd[1736]: time="2026-01-23T23:59:21.467638879Z" level=info msg="TearDown network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\" successfully" Jan 23 23:59:21.471431 containerd[1736]: time="2026-01-23T23:59:21.471405479Z" level=info msg="StopPodSandbox for \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\" returns successfully" Jan 23 23:59:21.474119 systemd[1]: run-netns-cni\x2d6ef1d604\x2de7dd\x2d79cb\x2d5672\x2dea45af80274d.mount: Deactivated successfully. Jan 23 23:59:21.478342 containerd[1736]: time="2026-01-23T23:59:21.478305601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snw5g,Uid:5d2e99f6-dee0-4678-aa07-fbf33b420e68,Namespace:kube-system,Attempt:1,}" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.373 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.375 [INFO][5164] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" iface="eth0" netns="/var/run/netns/cni-d074e071-f0dd-f502-2e56-5b44de45839f" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.376 [INFO][5164] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" iface="eth0" netns="/var/run/netns/cni-d074e071-f0dd-f502-2e56-5b44de45839f" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.380 [INFO][5164] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" iface="eth0" netns="/var/run/netns/cni-d074e071-f0dd-f502-2e56-5b44de45839f" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.380 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.380 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.436 [INFO][5182] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.436 [INFO][5182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.456 [INFO][5182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.472 [WARNING][5182] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.472 [INFO][5182] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.474 [INFO][5182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:21.484923 containerd[1736]: 2026-01-23 23:59:21.480 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:21.488143 containerd[1736]: time="2026-01-23T23:59:21.485111842Z" level=info msg="TearDown network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\" successfully" Jan 23 23:59:21.488143 containerd[1736]: time="2026-01-23T23:59:21.485130162Z" level=info msg="StopPodSandbox for \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\" returns successfully" Jan 23 23:59:21.488143 containerd[1736]: time="2026-01-23T23:59:21.486198882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674d7cd84f-5hq44,Uid:e0b5e5a7-1acb-4d63-8673-57e3c939b318,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:59:21.490930 systemd[1]: run-netns-cni\x2dd074e071\x2df0dd\x2df502\x2d2e56\x2d5b44de45839f.mount: Deactivated successfully. Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.402 [INFO][5154] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.403 [INFO][5154] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" iface="eth0" netns="/var/run/netns/cni-8b30047c-a242-df36-30a0-78d6d9b268f8" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.403 [INFO][5154] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" iface="eth0" netns="/var/run/netns/cni-8b30047c-a242-df36-30a0-78d6d9b268f8" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.403 [INFO][5154] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" iface="eth0" netns="/var/run/netns/cni-8b30047c-a242-df36-30a0-78d6d9b268f8" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.403 [INFO][5154] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.404 [INFO][5154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.453 [INFO][5195] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.453 [INFO][5195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.474 [INFO][5195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.489 [WARNING][5195] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.489 [INFO][5195] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.492 [INFO][5195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:21.495508 containerd[1736]: 2026-01-23 23:59:21.494 [INFO][5154] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:21.495980 containerd[1736]: time="2026-01-23T23:59:21.495854444Z" level=info msg="TearDown network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\" successfully" Jan 23 23:59:21.495980 containerd[1736]: time="2026-01-23T23:59:21.495875764Z" level=info msg="StopPodSandbox for \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\" returns successfully" Jan 23 23:59:21.496542 containerd[1736]: time="2026-01-23T23:59:21.496376604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-27fdn,Uid:693475f7-1f52-409e-89ad-83367b27d7ef,Namespace:calico-system,Attempt:1,}" Jan 23 23:59:21.502409 systemd[1]: run-netns-cni\x2d8b30047c\x2da242\x2ddf36\x2d30a0\x2d78d6d9b268f8.mount: Deactivated successfully. Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.424 [INFO][5165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.427 [INFO][5165] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" iface="eth0" netns="/var/run/netns/cni-0977bd37-129b-0784-9e81-d59ad0259789" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.427 [INFO][5165] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" iface="eth0" netns="/var/run/netns/cni-0977bd37-129b-0784-9e81-d59ad0259789" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.427 [INFO][5165] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" iface="eth0" netns="/var/run/netns/cni-0977bd37-129b-0784-9e81-d59ad0259789" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.427 [INFO][5165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.427 [INFO][5165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.486 [INFO][5201] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.487 [INFO][5201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.492 [INFO][5201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.506 [WARNING][5201] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.506 [INFO][5201] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.510 [INFO][5201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:21.516102 containerd[1736]: 2026-01-23 23:59:21.514 [INFO][5165] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:21.518840 systemd[1]: run-netns-cni\x2d0977bd37\x2d129b\x2d0784\x2d9e81\x2dd59ad0259789.mount: Deactivated successfully. Jan 23 23:59:21.520354 kubelet[3203]: E0123 23:59:21.519950 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2" Jan 23 23:59:21.520613 containerd[1736]: time="2026-01-23T23:59:21.519120809Z" level=info msg="TearDown network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\" successfully" Jan 23 23:59:21.520613 containerd[1736]: time="2026-01-23T23:59:21.519143969Z" level=info msg="StopPodSandbox for \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\" returns successfully" Jan 23 23:59:21.520613 containerd[1736]: time="2026-01-23T23:59:21.519662409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88658b6c-p27j5,Uid:849bc66d-ccf9-400e-bccb-fea5f90abeb0,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:59:21.801714 systemd-networkd[1364]: calid39944a0de9: Link UP Jan 23 23:59:21.804632 systemd-networkd[1364]: calid39944a0de9: Gained carrier Jan 23 23:59:21.814327 systemd-networkd[1364]: calide2fd386c59: Gained IPv6LL Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.683 [INFO][5224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0 calico-apiserver-674d7cd84f- calico-apiserver e0b5e5a7-1acb-4d63-8673-57e3c939b318 1035 0 2026-01-23 23:58:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674d7cd84f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 calico-apiserver-674d7cd84f-5hq44 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid39944a0de9 [] [] }} 
ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Namespace="calico-apiserver" Pod="calico-apiserver-674d7cd84f-5hq44" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.684 [INFO][5224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Namespace="calico-apiserver" Pod="calico-apiserver-674d7cd84f-5hq44" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.732 [INFO][5272] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" HandleID="k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.733 [INFO][5272] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" HandleID="k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cbcd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"calico-apiserver-674d7cd84f-5hq44", "timestamp":"2026-01-23 23:59:21.732668252 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.733 [INFO][5272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.733 [INFO][5272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.733 [INFO][5272] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543' Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.754 [INFO][5272] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.763 [INFO][5272] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.768 [INFO][5272] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.770 [INFO][5272] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.773 [INFO][5272] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.773 [INFO][5272] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.775 [INFO][5272] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.779 [INFO][5272] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.787 [INFO][5272] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.134/26] block=192.168.91.128/26 handle="k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.787 [INFO][5272] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.134/26] handle="k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" host="ci-4081.3.6-n-95a9bf6543" Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.787 [INFO][5272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:59:21.826322 containerd[1736]: 2026-01-23 23:59:21.787 [INFO][5272] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.134/26] IPv6=[] ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" HandleID="k8s-pod-network.51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0"
Jan 23 23:59:21.827112 containerd[1736]: 2026-01-23 23:59:21.790 [INFO][5224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Namespace="calico-apiserver" Pod="calico-apiserver-674d7cd84f-5hq44" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0", GenerateName:"calico-apiserver-674d7cd84f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0b5e5a7-1acb-4d63-8673-57e3c939b318", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674d7cd84f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"calico-apiserver-674d7cd84f-5hq44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid39944a0de9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:59:21.827112 containerd[1736]: 2026-01-23 23:59:21.795 [INFO][5224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.134/32] ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Namespace="calico-apiserver" Pod="calico-apiserver-674d7cd84f-5hq44" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0"
Jan 23 23:59:21.827112 containerd[1736]: 2026-01-23 23:59:21.796 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid39944a0de9 ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Namespace="calico-apiserver" Pod="calico-apiserver-674d7cd84f-5hq44" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0"
Jan 23 23:59:21.827112 containerd[1736]: 2026-01-23 23:59:21.804 [INFO][5224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Namespace="calico-apiserver" Pod="calico-apiserver-674d7cd84f-5hq44" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0"
Jan 23 23:59:21.827112 containerd[1736]: 2026-01-23 23:59:21.805 [INFO][5224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Namespace="calico-apiserver" Pod="calico-apiserver-674d7cd84f-5hq44" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0", GenerateName:"calico-apiserver-674d7cd84f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0b5e5a7-1acb-4d63-8673-57e3c939b318", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674d7cd84f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe", Pod:"calico-apiserver-674d7cd84f-5hq44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid39944a0de9", MAC:"d6:f2:fb:78:6c:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:59:21.827112 containerd[1736]: 2026-01-23 23:59:21.824 [INFO][5224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe" Namespace="calico-apiserver" Pod="calico-apiserver-674d7cd84f-5hq44" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0"
Jan 23 23:59:21.848776 containerd[1736]: time="2026-01-23T23:59:21.848561955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:59:21.848776 containerd[1736]: time="2026-01-23T23:59:21.848625755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:59:21.848776 containerd[1736]: time="2026-01-23T23:59:21.848636715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:21.848776 containerd[1736]: time="2026-01-23T23:59:21.848712196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:21.867107 systemd[1]: Started cri-containerd-51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe.scope - libcontainer container 51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe.
Jan 23 23:59:21.905625 systemd-networkd[1364]: cali30eeb51b4ec: Link UP
Jan 23 23:59:21.905828 systemd-networkd[1364]: cali30eeb51b4ec: Gained carrier
Jan 23 23:59:21.923407 containerd[1736]: time="2026-01-23T23:59:21.923264771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674d7cd84f-5hq44,Uid:e0b5e5a7-1acb-4d63-8673-57e3c939b318,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe\""
Jan 23 23:59:21.926022 containerd[1736]: time="2026-01-23T23:59:21.925294291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.669 [INFO][5215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0 coredns-668d6bf9bc- kube-system 5d2e99f6-dee0-4678-aa07-fbf33b420e68 1036 0 2026-01-23 23:58:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 coredns-668d6bf9bc-snw5g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali30eeb51b4ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-snw5g" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.669 [INFO][5215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-snw5g" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.736 [INFO][5265] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" HandleID="k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.736 [INFO][5265] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" HandleID="k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"coredns-668d6bf9bc-snw5g", "timestamp":"2026-01-23 23:59:21.736051053 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.737 [INFO][5265] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.787 [INFO][5265] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.787 [INFO][5265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543'
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.854 [INFO][5265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.865 [INFO][5265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.870 [INFO][5265] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.872 [INFO][5265] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.874 [INFO][5265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.874 [INFO][5265] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.875 [INFO][5265] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.882 [INFO][5265] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.892 [INFO][5265] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.135/26] block=192.168.91.128/26 handle="k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.892 [INFO][5265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.135/26] handle="k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.892 [INFO][5265] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:59:21.932088 containerd[1736]: 2026-01-23 23:59:21.892 [INFO][5265] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.135/26] IPv6=[] ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" HandleID="k8s-pod-network.1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0"
Jan 23 23:59:21.932672 containerd[1736]: 2026-01-23 23:59:21.894 [INFO][5215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-snw5g" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d2e99f6-dee0-4678-aa07-fbf33b420e68", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"coredns-668d6bf9bc-snw5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30eeb51b4ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:59:21.932672 containerd[1736]: 2026-01-23 23:59:21.894 [INFO][5215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.135/32] ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-snw5g" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0"
Jan 23 23:59:21.932672 containerd[1736]: 2026-01-23 23:59:21.894 [INFO][5215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30eeb51b4ec ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-snw5g" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0"
Jan 23 23:59:21.932672 containerd[1736]: 2026-01-23 23:59:21.908 [INFO][5215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-snw5g" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0"
Jan 23 23:59:21.932672 containerd[1736]: 2026-01-23 23:59:21.910 [INFO][5215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-snw5g" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d2e99f6-dee0-4678-aa07-fbf33b420e68", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd", Pod:"coredns-668d6bf9bc-snw5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30eeb51b4ec", MAC:"2e:81:2d:8b:e9:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:59:21.932672 containerd[1736]: 2026-01-23 23:59:21.929 [INFO][5215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-snw5g" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0"
Jan 23 23:59:21.970714 containerd[1736]: time="2026-01-23T23:59:21.957161257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:59:21.970714 containerd[1736]: time="2026-01-23T23:59:21.957220737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:59:21.970714 containerd[1736]: time="2026-01-23T23:59:21.957235377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:21.970714 containerd[1736]: time="2026-01-23T23:59:21.957309777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:21.994109 systemd[1]: Started cri-containerd-1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd.scope - libcontainer container 1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd.
Jan 23 23:59:22.007669 systemd-networkd[1364]: califc09ccfe405: Link UP
Jan 23 23:59:22.010396 systemd-networkd[1364]: califc09ccfe405: Gained carrier
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.696 [INFO][5247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0 goldmane-666569f655- calico-system 693475f7-1f52-409e-89ad-83367b27d7ef 1037 0 2026-01-23 23:58:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 goldmane-666569f655-27fdn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califc09ccfe405 [] [] }} ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Namespace="calico-system" Pod="goldmane-666569f655-27fdn" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.696 [INFO][5247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Namespace="calico-system" Pod="goldmane-666569f655-27fdn" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.751 [INFO][5281] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" HandleID="k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.752 [INFO][5281] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" HandleID="k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"goldmane-666569f655-27fdn", "timestamp":"2026-01-23 23:59:21.751988336 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.753 [INFO][5281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.892 [INFO][5281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.892 [INFO][5281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543'
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.960 [INFO][5281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.965 [INFO][5281] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.969 [INFO][5281] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.971 [INFO][5281] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.973 [INFO][5281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.973 [INFO][5281] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.975 [INFO][5281] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.981 [INFO][5281] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.991 [INFO][5281] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.136/26] block=192.168.91.128/26 handle="k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.991 [INFO][5281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.136/26] handle="k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.992 [INFO][5281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:59:22.045508 containerd[1736]: 2026-01-23 23:59:21.992 [INFO][5281] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.136/26] IPv6=[] ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" HandleID="k8s-pod-network.4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0"
Jan 23 23:59:22.046227 containerd[1736]: 2026-01-23 23:59:21.997 [INFO][5247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Namespace="calico-system" Pod="goldmane-666569f655-27fdn" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"693475f7-1f52-409e-89ad-83367b27d7ef", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"goldmane-666569f655-27fdn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califc09ccfe405", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:59:22.046227 containerd[1736]: 2026-01-23 23:59:21.998 [INFO][5247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.136/32] ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Namespace="calico-system" Pod="goldmane-666569f655-27fdn" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0"
Jan 23 23:59:22.046227 containerd[1736]: 2026-01-23 23:59:21.998 [INFO][5247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc09ccfe405 ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Namespace="calico-system" Pod="goldmane-666569f655-27fdn" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0"
Jan 23 23:59:22.046227 containerd[1736]: 2026-01-23 23:59:22.017 [INFO][5247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Namespace="calico-system" Pod="goldmane-666569f655-27fdn" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0"
Jan 23 23:59:22.046227 containerd[1736]: 2026-01-23 23:59:22.019 [INFO][5247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Namespace="calico-system" Pod="goldmane-666569f655-27fdn" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"693475f7-1f52-409e-89ad-83367b27d7ef", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77", Pod:"goldmane-666569f655-27fdn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califc09ccfe405", MAC:"ee:6d:93:5b:ba:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:59:22.046227 containerd[1736]: 2026-01-23 23:59:22.038 [INFO][5247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77" Namespace="calico-system" Pod="goldmane-666569f655-27fdn" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0"
Jan 23 23:59:22.061602 containerd[1736]: time="2026-01-23T23:59:22.059737038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snw5g,Uid:5d2e99f6-dee0-4678-aa07-fbf33b420e68,Namespace:kube-system,Attempt:1,} returns sandbox id \"1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd\""
Jan 23 23:59:22.095610 containerd[1736]: time="2026-01-23T23:59:22.095561005Z" level=info msg="CreateContainer within sandbox \"1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 23:59:22.106604 containerd[1736]: time="2026-01-23T23:59:22.106119167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:59:22.106604 containerd[1736]: time="2026-01-23T23:59:22.106195247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:59:22.106604 containerd[1736]: time="2026-01-23T23:59:22.106209847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:22.106604 containerd[1736]: time="2026-01-23T23:59:22.106301047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:22.130232 systemd-networkd[1364]: calia7e937803c7: Link UP
Jan 23 23:59:22.130908 systemd-networkd[1364]: calia7e937803c7: Gained carrier
Jan 23 23:59:22.131221 systemd[1]: Started cri-containerd-4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77.scope - libcontainer container 4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77.
Jan 23 23:59:22.145662 containerd[1736]: time="2026-01-23T23:59:22.145603655Z" level=info msg="CreateContainer within sandbox \"1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38e93f849a48d87fe83266a30afb6584bd0f575bc6b97e2f8e6ad695dff8c4d4\""
Jan 23 23:59:22.147238 containerd[1736]: time="2026-01-23T23:59:22.146398576Z" level=info msg="StartContainer for \"38e93f849a48d87fe83266a30afb6584bd0f575bc6b97e2f8e6ad695dff8c4d4\""
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:21.686 [INFO][5236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0 calico-apiserver-5f88658b6c- calico-apiserver 849bc66d-ccf9-400e-bccb-fea5f90abeb0 1038 0 2026-01-23 23:58:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f88658b6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-95a9bf6543 calico-apiserver-5f88658b6c-p27j5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia7e937803c7 [] [] }} ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-p27j5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:21.686 [INFO][5236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-p27j5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:21.751 [INFO][5270] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" HandleID="k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:21.753 [INFO][5270] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" HandleID="k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000392bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-95a9bf6543", "pod":"calico-apiserver-5f88658b6c-p27j5", "timestamp":"2026-01-23 23:59:21.751282576 +0000 UTC"}, Hostname:"ci-4081.3.6-n-95a9bf6543", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:21.753 [INFO][5270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:21.992 [INFO][5270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:21.992 [INFO][5270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-95a9bf6543'
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.055 [INFO][5270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.067 [INFO][5270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.075 [INFO][5270] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.081 [INFO][5270] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.088 [INFO][5270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.088 [INFO][5270] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.092 [INFO][5270] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.103 [INFO][5270] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.117 [INFO][5270] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.137/26] block=192.168.91.128/26 handle="k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.117 [INFO][5270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.137/26] handle="k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" host="ci-4081.3.6-n-95a9bf6543"
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.117 [INFO][5270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:59:22.156679 containerd[1736]: 2026-01-23 23:59:22.118 [INFO][5270] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.137/26] IPv6=[] ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" HandleID="k8s-pod-network.4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0"
Jan 23 23:59:22.157466 containerd[1736]: 2026-01-23 23:59:22.124 [INFO][5236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-p27j5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0", GenerateName:"calico-apiserver-5f88658b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"849bc66d-ccf9-400e-bccb-fea5f90abeb0", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88658b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"", Pod:"calico-apiserver-5f88658b6c-p27j5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7e937803c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:59:22.157466 containerd[1736]: 2026-01-23 23:59:22.125 [INFO][5236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.137/32] ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-p27j5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0"
Jan 23 23:59:22.157466 containerd[1736]: 2026-01-23 23:59:22.125 [INFO][5236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7e937803c7 ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-p27j5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0"
Jan 23 23:59:22.157466 containerd[1736]: 2026-01-23 23:59:22.132 [INFO][5236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-p27j5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0"
Jan 23 23:59:22.157466 containerd[1736]: 2026-01-23 23:59:22.134 [INFO][5236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-p27j5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0", GenerateName:"calico-apiserver-5f88658b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"849bc66d-ccf9-400e-bccb-fea5f90abeb0", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88658b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450", Pod:"calico-apiserver-5f88658b6c-p27j5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7e937803c7", MAC:"c2:e6:c1:d1:1d:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:59:22.157466 containerd[1736]: 2026-01-23 23:59:22.148 [INFO][5236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450" Namespace="calico-apiserver" Pod="calico-apiserver-5f88658b6c-p27j5" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0"
Jan 23 23:59:22.193163 systemd[1]: Started cri-containerd-38e93f849a48d87fe83266a30afb6584bd0f575bc6b97e2f8e6ad695dff8c4d4.scope - libcontainer container 38e93f849a48d87fe83266a30afb6584bd0f575bc6b97e2f8e6ad695dff8c4d4.
Jan 23 23:59:22.197906 containerd[1736]: time="2026-01-23T23:59:22.197864306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-27fdn,Uid:693475f7-1f52-409e-89ad-83367b27d7ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77\""
Jan 23 23:59:22.206642 containerd[1736]: time="2026-01-23T23:59:22.206440788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:59:22.206642 containerd[1736]: time="2026-01-23T23:59:22.206605628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:59:22.206779 containerd[1736]: time="2026-01-23T23:59:22.206631668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:22.206953 containerd[1736]: time="2026-01-23T23:59:22.206793108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:22.212136 containerd[1736]: time="2026-01-23T23:59:22.211841309Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:22.217118 containerd[1736]: time="2026-01-23T23:59:22.217075190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 23:59:22.218442 containerd[1736]: time="2026-01-23T23:59:22.218363550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:59:22.219083 kubelet[3203]: E0123 23:59:22.219046 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:59:22.219261 kubelet[3203]: E0123 23:59:22.219158 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:59:22.220071 kubelet[3203]: E0123 23:59:22.219523 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtpcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674d7cd84f-5hq44_calico-apiserver(e0b5e5a7-1acb-4d63-8673-57e3c939b318): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:22.221222 kubelet[3203]: E0123 23:59:22.221180 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318"
Jan 23 23:59:22.222218 containerd[1736]: time="2026-01-23T23:59:22.222192671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 23:59:22.231180 systemd[1]: Started cri-containerd-4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450.scope - libcontainer container 4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450.
Jan 23 23:59:22.244323 containerd[1736]: time="2026-01-23T23:59:22.243767435Z" level=info msg="StartContainer for \"38e93f849a48d87fe83266a30afb6584bd0f575bc6b97e2f8e6ad695dff8c4d4\" returns successfully"
Jan 23 23:59:22.311674 containerd[1736]: time="2026-01-23T23:59:22.311524889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f88658b6c-p27j5,Uid:849bc66d-ccf9-400e-bccb-fea5f90abeb0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450\""
Jan 23 23:59:22.477922 containerd[1736]: time="2026-01-23T23:59:22.477714082Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:22.483274 containerd[1736]: time="2026-01-23T23:59:22.483236123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 23:59:22.483503 containerd[1736]: time="2026-01-23T23:59:22.483320483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:59:22.483623 kubelet[3203]: E0123 23:59:22.483585 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 23:59:22.483678 kubelet[3203]: E0123 23:59:22.483632 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 23:59:22.484123 kubelet[3203]: E0123 23:59:22.483856 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nbqht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-27fdn_calico-system(693475f7-1f52-409e-89ad-83367b27d7ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:22.484338 containerd[1736]: time="2026-01-23T23:59:22.483890284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:59:22.485644 kubelet[3203]: E0123 23:59:22.485604 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef"
Jan 23 23:59:22.523844 kubelet[3203]: E0123 23:59:22.523709 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318"
Jan 23 23:59:22.529108 kubelet[3203]: E0123 23:59:22.529055 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef"
Jan 23 23:59:22.559401 kubelet[3203]: I0123 23:59:22.559343 3203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-snw5g" podStartSLOduration=44.559325418 podStartE2EDuration="44.559325418s" podCreationTimestamp="2026-01-23 23:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:59:22.541089655 +0000 UTC m=+50.390156737" watchObservedRunningTime="2026-01-23 23:59:22.559325418 +0000 UTC m=+50.408392500"
Jan 23 23:59:22.736469 containerd[1736]: time="2026-01-23T23:59:22.736278093Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:22.739771 containerd[1736]: time="2026-01-23T23:59:22.739676613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 23:59:22.739771 containerd[1736]: time="2026-01-23T23:59:22.739740453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:59:22.740099 kubelet[3203]: E0123 23:59:22.739854 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:59:22.740099 kubelet[3203]: E0123 23:59:22.739899 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:59:22.740099 kubelet[3203]: E0123 23:59:22.740035 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrx8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88658b6c-p27j5_calico-apiserver(849bc66d-ccf9-400e-bccb-fea5f90abeb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:22.741587 kubelet[3203]: E0123 23:59:22.741557 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0"
Jan 23 23:59:23.221096 systemd-networkd[1364]: califc09ccfe405: Gained IPv6LL
Jan 23 23:59:23.532636 kubelet[3203]: E0123 23:59:23.532319 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 23 23:59:23.532636 kubelet[3203]: E0123 23:59:23.532347 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef" Jan 23 23:59:23.533332 kubelet[3203]: E0123 23:59:23.532669 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 23 23:59:23.542144 systemd-networkd[1364]: calid39944a0de9: Gained IPv6LL Jan 23 23:59:23.605142 systemd-networkd[1364]: cali30eeb51b4ec: Gained IPv6LL Jan 23 23:59:23.861138 systemd-networkd[1364]: calia7e937803c7: Gained IPv6LL Jan 23 23:59:25.484625 kubelet[3203]: I0123 23:59:25.484381 3203 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 23:59:28.290851 containerd[1736]: time="2026-01-23T23:59:28.290809330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:59:28.542781 containerd[1736]: time="2026-01-23T23:59:28.542644779Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:28.545716 containerd[1736]: time="2026-01-23T23:59:28.545619740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:59:28.545716 containerd[1736]: time="2026-01-23T23:59:28.545672780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:59:28.545866 kubelet[3203]: E0123 23:59:28.545812 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:59:28.545866 kubelet[3203]: E0123 23:59:28.545858 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:59:28.552420 
kubelet[3203]: E0123 23:59:28.552361 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7e3e9b460e424236a2b5a2375c5d7b77,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787b66fb85-crtpt_calico-system(8ee41f25-89f1-4519-b99e-33fdb651ce3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:28.554916 containerd[1736]: time="2026-01-23T23:59:28.554886422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:59:28.826128 containerd[1736]: time="2026-01-23T23:59:28.825723634Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:28.831019 containerd[1736]: time="2026-01-23T23:59:28.830280275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:59:28.831019 containerd[1736]: time="2026-01-23T23:59:28.830362035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:59:28.831228 kubelet[3203]: E0123 23:59:28.831168 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:59:28.831228 kubelet[3203]: E0123 23:59:28.831213 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:59:28.832086 kubelet[3203]: E0123 23:59:28.831308 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787b66fb85-crtpt_calico-system(8ee41f25-89f1-4519-b99e-33fdb651ce3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:28.832730 kubelet[3203]: E0123 23:59:28.832691 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 23 23:59:30.271449 containerd[1736]: time="2026-01-23T23:59:30.271411115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:59:30.514145 containerd[1736]: 
time="2026-01-23T23:59:30.514099680Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:30.518418 containerd[1736]: time="2026-01-23T23:59:30.518374721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:59:30.518502 containerd[1736]: time="2026-01-23T23:59:30.518484281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:59:30.518684 kubelet[3203]: E0123 23:59:30.518648 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:30.518976 kubelet[3203]: E0123 23:59:30.518696 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:30.518976 kubelet[3203]: E0123 23:59:30.518833 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js7r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88658b6c-q6dt5_calico-apiserver(251b4c3c-e8df-4086-8bfb-8297ee672eec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:30.520345 kubelet[3203]: E0123 23:59:30.520300 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec" Jan 23 23:59:32.274204 containerd[1736]: time="2026-01-23T23:59:32.274042501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:59:32.287634 containerd[1736]: time="2026-01-23T23:59:32.287319663Z" level=info msg="StopPodSandbox for \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\"" Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.348 [WARNING][5603] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d2e99f6-dee0-4678-aa07-fbf33b420e68", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd", Pod:"coredns-668d6bf9bc-snw5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30eeb51b4ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.349 [INFO][5603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.349 [INFO][5603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" iface="eth0" netns="" Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.349 [INFO][5603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.349 [INFO][5603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.371 [INFO][5611] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.371 [INFO][5611] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.371 [INFO][5611] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.383 [WARNING][5611] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.383 [INFO][5611] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.384 [INFO][5611] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:32.388021 containerd[1736]: 2026-01-23 23:59:32.386 [INFO][5603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:32.388477 containerd[1736]: time="2026-01-23T23:59:32.388070118Z" level=info msg="TearDown network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\" successfully" Jan 23 23:59:32.388477 containerd[1736]: time="2026-01-23T23:59:32.388095238Z" level=info msg="StopPodSandbox for \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\" returns successfully" Jan 23 23:59:32.388908 containerd[1736]: time="2026-01-23T23:59:32.388888839Z" level=info msg="RemovePodSandbox for \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\"" Jan 23 23:59:32.389004 containerd[1736]: time="2026-01-23T23:59:32.388914919Z" level=info msg="Forcibly stopping sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\"" Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.422 [WARNING][5625] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d2e99f6-dee0-4678-aa07-fbf33b420e68", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"1b56a8416a42fdb0fdcb5ea24af7b9bd270ca8e5b929b0e62e9e95e3f5b50fcd", Pod:"coredns-668d6bf9bc-snw5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30eeb51b4ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.422 [INFO][5625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.422 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" iface="eth0" netns="" Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.423 [INFO][5625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.423 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.456 [INFO][5632] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.456 [INFO][5632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.456 [INFO][5632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.472 [WARNING][5632] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.472 [INFO][5632] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" HandleID="k8s-pod-network.728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--snw5g-eth0" Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.475 [INFO][5632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:32.478826 containerd[1736]: 2026-01-23 23:59:32.477 [INFO][5625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878" Jan 23 23:59:32.479371 containerd[1736]: time="2026-01-23T23:59:32.478863772Z" level=info msg="TearDown network for sandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\" successfully" Jan 23 23:59:32.492775 containerd[1736]: time="2026-01-23T23:59:32.492724334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:32.492870 containerd[1736]: time="2026-01-23T23:59:32.492810494Z" level=info msg="RemovePodSandbox \"728a424675f7599ed97f3d09a9053611719867c02e1aece6d43e486a65d48878\" returns successfully" Jan 23 23:59:32.493578 containerd[1736]: time="2026-01-23T23:59:32.493318774Z" level=info msg="StopPodSandbox for \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\"" Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.525 [WARNING][5646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30575e89-2706-4309-ac97-5d65652326e6", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1", Pod:"coredns-668d6bf9bc-w8mnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide2fd386c59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.526 [INFO][5646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.526 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" iface="eth0" netns="" Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.526 [INFO][5646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.526 [INFO][5646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.544 [INFO][5654] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.544 [INFO][5654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.544 [INFO][5654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.552 [WARNING][5654] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.552 [INFO][5654] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.553 [INFO][5654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:32.557358 containerd[1736]: 2026-01-23 23:59:32.555 [INFO][5646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:32.557358 containerd[1736]: time="2026-01-23T23:59:32.557307904Z" level=info msg="TearDown network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\" successfully" Jan 23 23:59:32.558413 containerd[1736]: time="2026-01-23T23:59:32.557333704Z" level=info msg="StopPodSandbox for \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\" returns successfully" Jan 23 23:59:32.559433 containerd[1736]: time="2026-01-23T23:59:32.559403584Z" level=info msg="RemovePodSandbox for \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\"" Jan 23 23:59:32.559507 containerd[1736]: time="2026-01-23T23:59:32.559437504Z" level=info msg="Forcibly stopping sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\"" Jan 23 23:59:32.570929 containerd[1736]: time="2026-01-23T23:59:32.570888346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:32.575828 containerd[1736]: time="2026-01-23T23:59:32.575784306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:59:32.576256 containerd[1736]: time="2026-01-23T23:59:32.575865266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:59:32.576363 kubelet[3203]: E0123 23:59:32.576321 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:59:32.576878 kubelet[3203]: E0123 23:59:32.576370 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:59:32.576878 kubelet[3203]: E0123 23:59:32.576586 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:32.578208 containerd[1736]: time="2026-01-23T23:59:32.577576387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.608 [WARNING][5668] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30575e89-2706-4309-ac97-5d65652326e6", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"061ca5fc58da08e9f5c2083700621ab6deb7b1004d744c2df8cd1963889009f1", Pod:"coredns-668d6bf9bc-w8mnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide2fd386c59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.608 [INFO][5668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.608 [INFO][5668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" iface="eth0" netns="" Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.608 [INFO][5668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.608 [INFO][5668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.630 [INFO][5676] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.630 [INFO][5676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.630 [INFO][5676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.638 [WARNING][5676] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.639 [INFO][5676] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" HandleID="k8s-pod-network.abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Workload="ci--4081.3.6--n--95a9bf6543-k8s-coredns--668d6bf9bc--w8mnm-eth0" Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.640 [INFO][5676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:32.643250 containerd[1736]: 2026-01-23 23:59:32.641 [INFO][5668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766" Jan 23 23:59:32.643723 containerd[1736]: time="2026-01-23T23:59:32.643298076Z" level=info msg="TearDown network for sandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\" successfully" Jan 23 23:59:32.656772 containerd[1736]: time="2026-01-23T23:59:32.656723518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:32.656902 containerd[1736]: time="2026-01-23T23:59:32.656791518Z" level=info msg="RemovePodSandbox \"abf80cbf21647de25f5ffc3f6681180d2ba0a92b0e2713b1717f5cded1f61766\" returns successfully" Jan 23 23:59:32.657255 containerd[1736]: time="2026-01-23T23:59:32.657231518Z" level=info msg="StopPodSandbox for \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\"" Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.693 [WARNING][5690] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0", GenerateName:"calico-kube-controllers-6977ffbc55-", Namespace:"calico-system", SelfLink:"", UID:"a31be8f9-573e-4955-99b0-981cca2e99b2", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6977ffbc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51", Pod:"calico-kube-controllers-6977ffbc55-s4jdp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibaee1e843c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.693 [INFO][5690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.693 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" iface="eth0" netns="" Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.693 [INFO][5690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.693 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.718 [INFO][5697] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.718 [INFO][5697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.718 [INFO][5697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.729 [WARNING][5697] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.729 [INFO][5697] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.732 [INFO][5697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:32.739095 containerd[1736]: 2026-01-23 23:59:32.736 [INFO][5690] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:32.739580 containerd[1736]: time="2026-01-23T23:59:32.739144371Z" level=info msg="TearDown network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\" successfully" Jan 23 23:59:32.739580 containerd[1736]: time="2026-01-23T23:59:32.739169771Z" level=info msg="StopPodSandbox for \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\" returns successfully" Jan 23 23:59:32.739635 containerd[1736]: time="2026-01-23T23:59:32.739601971Z" level=info msg="RemovePodSandbox for \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\"" Jan 23 23:59:32.739635 containerd[1736]: time="2026-01-23T23:59:32.739628451Z" level=info msg="Forcibly stopping sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\"" Jan 23 23:59:32.819255 containerd[1736]: time="2026-01-23T23:59:32.818864102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:32.821500 containerd[1736]: time="2026-01-23T23:59:32.821338583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:59:32.821500 containerd[1736]: time="2026-01-23T23:59:32.821409663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:59:32.821643 kubelet[3203]: E0123 23:59:32.821594 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:59:32.822218 kubelet[3203]: E0123 23:59:32.821637 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:59:32.822218 kubelet[3203]: E0123 23:59:32.821829 3203 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmjjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6977ffbc55-s4jdp_calico-system(a31be8f9-573e-4955-99b0-981cca2e99b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:32.822747 containerd[1736]: time="2026-01-23T23:59:32.822681063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:59:32.824031 kubelet[3203]: E0123 23:59:32.823995 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2" Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.804 [WARNING][5712] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0", GenerateName:"calico-kube-controllers-6977ffbc55-", Namespace:"calico-system", SelfLink:"", UID:"a31be8f9-573e-4955-99b0-981cca2e99b2", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6977ffbc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"6fbe95f675bce86efda750be1f3be30fb73403bf18c51a83a71477c9aaad5d51", Pod:"calico-kube-controllers-6977ffbc55-s4jdp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibaee1e843c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.805 [INFO][5712] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.805 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" iface="eth0" netns="" Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.805 [INFO][5712] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.805 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.842 [INFO][5720] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.843 [INFO][5720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.843 [INFO][5720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.858 [WARNING][5720] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.858 [INFO][5720] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" HandleID="k8s-pod-network.b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--kube--controllers--6977ffbc55--s4jdp-eth0" Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.861 [INFO][5720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:32.865129 containerd[1736]: 2026-01-23 23:59:32.862 [INFO][5712] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19" Jan 23 23:59:32.865566 containerd[1736]: time="2026-01-23T23:59:32.865179229Z" level=info msg="TearDown network for sandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\" successfully" Jan 23 23:59:32.872691 containerd[1736]: time="2026-01-23T23:59:32.872648510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:32.872815 containerd[1736]: time="2026-01-23T23:59:32.872730710Z" level=info msg="RemovePodSandbox \"b78de2239c894056453ddbb9ca1cce17af56d872bd6e33d6d4d9be349a5afd19\" returns successfully" Jan 23 23:59:32.873764 containerd[1736]: time="2026-01-23T23:59:32.873199590Z" level=info msg="StopPodSandbox for \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\"" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.908 [WARNING][5735] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.908 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.908 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" iface="eth0" netns="" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.908 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.908 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.929 [INFO][5742] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.929 [INFO][5742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.929 [INFO][5742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.938 [WARNING][5742] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.938 [INFO][5742] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.942 [INFO][5742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:32.946082 containerd[1736]: 2026-01-23 23:59:32.944 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:32.946447 containerd[1736]: time="2026-01-23T23:59:32.946129721Z" level=info msg="TearDown network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\" successfully" Jan 23 23:59:32.946447 containerd[1736]: time="2026-01-23T23:59:32.946153561Z" level=info msg="StopPodSandbox for \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\" returns successfully" Jan 23 23:59:32.947320 containerd[1736]: time="2026-01-23T23:59:32.947297641Z" level=info msg="RemovePodSandbox for \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\"" Jan 23 23:59:32.947386 containerd[1736]: time="2026-01-23T23:59:32.947326801Z" level=info msg="Forcibly stopping sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\"" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:32.984 [WARNING][5756] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" WorkloadEndpoint="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:32.984 [INFO][5756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:32.984 [INFO][5756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" iface="eth0" netns="" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:32.984 [INFO][5756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:32.984 [INFO][5756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:33.002 [INFO][5764] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:33.003 [INFO][5764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:33.003 [INFO][5764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:33.013 [WARNING][5764] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:33.013 [INFO][5764] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" HandleID="k8s-pod-network.bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Workload="ci--4081.3.6--n--95a9bf6543-k8s-whisker--6fdf9dbdcc--dnl8w-eth0" Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:33.014 [INFO][5764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.017761 containerd[1736]: 2026-01-23 23:59:33.016 [INFO][5756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97" Jan 23 23:59:33.018733 containerd[1736]: time="2026-01-23T23:59:33.017806652Z" level=info msg="TearDown network for sandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\" successfully" Jan 23 23:59:33.038159 containerd[1736]: time="2026-01-23T23:59:33.038110375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:33.038296 containerd[1736]: time="2026-01-23T23:59:33.038171535Z" level=info msg="RemovePodSandbox \"bd129d78315d50ab977b576938c3e1e0a8f1ac9bbcbd75c205907003dc87ed97\" returns successfully" Jan 23 23:59:33.038713 containerd[1736]: time="2026-01-23T23:59:33.038688975Z" level=info msg="StopPodSandbox for \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\"" Jan 23 23:59:33.091669 containerd[1736]: time="2026-01-23T23:59:33.090902103Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:33.096199 containerd[1736]: time="2026-01-23T23:59:33.096148623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:59:33.096305 containerd[1736]: time="2026-01-23T23:59:33.096255144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:59:33.098583 kubelet[3203]: E0123 23:59:33.097074 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:59:33.098583 kubelet[3203]: E0123 23:59:33.097123 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:59:33.098583 kubelet[3203]: E0123 23:59:33.097234 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:33.101098 kubelet[3203]: E0123 23:59:33.101027 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.077 [WARNING][5778] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0", GenerateName:"calico-apiserver-674d7cd84f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0b5e5a7-1acb-4d63-8673-57e3c939b318", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674d7cd84f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe", Pod:"calico-apiserver-674d7cd84f-5hq44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid39944a0de9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.077 [INFO][5778] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.077 [INFO][5778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" iface="eth0" netns="" Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.077 [INFO][5778] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.077 [INFO][5778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.104 [INFO][5785] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.104 [INFO][5785] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.104 [INFO][5785] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.117 [WARNING][5785] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.117 [INFO][5785] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.118 [INFO][5785] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.122342 containerd[1736]: 2026-01-23 23:59:33.120 [INFO][5778] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:33.122749 containerd[1736]: time="2026-01-23T23:59:33.122377747Z" level=info msg="TearDown network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\" successfully" Jan 23 23:59:33.122749 containerd[1736]: time="2026-01-23T23:59:33.122418347Z" level=info msg="StopPodSandbox for \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\" returns successfully" Jan 23 23:59:33.123290 containerd[1736]: time="2026-01-23T23:59:33.123258948Z" level=info msg="RemovePodSandbox for \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\"" Jan 23 23:59:33.123350 containerd[1736]: time="2026-01-23T23:59:33.123291548Z" level=info msg="Forcibly stopping sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\"" Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.162 [WARNING][5799] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0", GenerateName:"calico-apiserver-674d7cd84f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0b5e5a7-1acb-4d63-8673-57e3c939b318", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674d7cd84f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"51b5c1e593fc8d752e32977b062f81038d65c7e1555d956457185ad42a764efe", Pod:"calico-apiserver-674d7cd84f-5hq44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid39944a0de9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.163 [INFO][5799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.163 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" iface="eth0" netns="" Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.163 [INFO][5799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.163 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.190 [INFO][5806] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.190 [INFO][5806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.190 [INFO][5806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.206 [WARNING][5806] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.206 [INFO][5806] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" HandleID="k8s-pod-network.8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--674d7cd84f--5hq44-eth0" Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.213 [INFO][5806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.217990 containerd[1736]: 2026-01-23 23:59:33.216 [INFO][5799] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b" Jan 23 23:59:33.218399 containerd[1736]: time="2026-01-23T23:59:33.218026642Z" level=info msg="TearDown network for sandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\" successfully" Jan 23 23:59:33.228006 containerd[1736]: time="2026-01-23T23:59:33.227960683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:33.228113 containerd[1736]: time="2026-01-23T23:59:33.228024243Z" level=info msg="RemovePodSandbox \"8efa007094fbb378cb3871e24cb0a686cc1015afeb13d0a40270887e68c9cc0b\" returns successfully" Jan 23 23:59:33.228529 containerd[1736]: time="2026-01-23T23:59:33.228499083Z" level=info msg="StopPodSandbox for \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\"" Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.293 [WARNING][5820] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"693475f7-1f52-409e-89ad-83367b27d7ef", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77", Pod:"goldmane-666569f655-27fdn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califc09ccfe405", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.294 [INFO][5820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.294 [INFO][5820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" iface="eth0" netns="" Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.294 [INFO][5820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.294 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.331 [INFO][5827] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.333 [INFO][5827] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.333 [INFO][5827] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.344 [WARNING][5827] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.344 [INFO][5827] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.345 [INFO][5827] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.355179 containerd[1736]: 2026-01-23 23:59:33.352 [INFO][5820] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:33.356981 containerd[1736]: time="2026-01-23T23:59:33.355150262Z" level=info msg="TearDown network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\" successfully" Jan 23 23:59:33.356981 containerd[1736]: time="2026-01-23T23:59:33.356035022Z" level=info msg="StopPodSandbox for \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\" returns successfully" Jan 23 23:59:33.356981 containerd[1736]: time="2026-01-23T23:59:33.356373822Z" level=info msg="RemovePodSandbox for \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\"" Jan 23 23:59:33.356981 containerd[1736]: time="2026-01-23T23:59:33.356398262Z" level=info msg="Forcibly stopping sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\"" Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.400 [WARNING][5841] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"693475f7-1f52-409e-89ad-83367b27d7ef", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"4be594efffce653b5be6dbf570dba8a42b24f32a1e168dba24296d6a6a632c77", Pod:"goldmane-666569f655-27fdn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califc09ccfe405", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.400 [INFO][5841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.400 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" iface="eth0" netns="" Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.400 [INFO][5841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.400 [INFO][5841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.423 [INFO][5848] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.423 [INFO][5848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.423 [INFO][5848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.435 [WARNING][5848] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.435 [INFO][5848] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" HandleID="k8s-pod-network.31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Workload="ci--4081.3.6--n--95a9bf6543-k8s-goldmane--666569f655--27fdn-eth0" Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.436 [INFO][5848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.442001 containerd[1736]: 2026-01-23 23:59:33.439 [INFO][5841] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0" Jan 23 23:59:33.442001 containerd[1736]: time="2026-01-23T23:59:33.441547515Z" level=info msg="TearDown network for sandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\" successfully" Jan 23 23:59:33.451770 containerd[1736]: time="2026-01-23T23:59:33.451731396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:33.452046 containerd[1736]: time="2026-01-23T23:59:33.451934676Z" level=info msg="RemovePodSandbox \"31c3053f45333c926f937c1defa0d3323aa1664674bb32ea7c513f512cbacba0\" returns successfully" Jan 23 23:59:33.452468 containerd[1736]: time="2026-01-23T23:59:33.452445476Z" level=info msg="StopPodSandbox for \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\"" Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.496 [WARNING][5862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0", GenerateName:"calico-apiserver-5f88658b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"251b4c3c-e8df-4086-8bfb-8297ee672eec", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88658b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af", Pod:"calico-apiserver-5f88658b6c-q6dt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibed1104d4c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.496 [INFO][5862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.496 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" iface="eth0" netns="" Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.496 [INFO][5862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.496 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.523 [INFO][5869] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.523 [INFO][5869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.523 [INFO][5869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.531 [WARNING][5869] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.531 [INFO][5869] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.533 [INFO][5869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.538252 containerd[1736]: 2026-01-23 23:59:33.535 [INFO][5862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:33.538252 containerd[1736]: time="2026-01-23T23:59:33.537683889Z" level=info msg="TearDown network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\" successfully" Jan 23 23:59:33.538252 containerd[1736]: time="2026-01-23T23:59:33.537714969Z" level=info msg="StopPodSandbox for \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\" returns successfully" Jan 23 23:59:33.538702 containerd[1736]: time="2026-01-23T23:59:33.538318969Z" level=info msg="RemovePodSandbox for \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\"" Jan 23 23:59:33.538702 containerd[1736]: time="2026-01-23T23:59:33.538376889Z" level=info msg="Forcibly stopping sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\"" Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.581 [WARNING][5883] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0", GenerateName:"calico-apiserver-5f88658b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"251b4c3c-e8df-4086-8bfb-8297ee672eec", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88658b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"9bfeeb5e03b34d1fb2ad0108686477cefaa9c062ac6130046f642fb8e7b1c9af", Pod:"calico-apiserver-5f88658b6c-q6dt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibed1104d4c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.581 [INFO][5883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.581 [INFO][5883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" iface="eth0" netns="" Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.581 [INFO][5883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.581 [INFO][5883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.602 [INFO][5890] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.602 [INFO][5890] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.603 [INFO][5890] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.612 [WARNING][5890] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.612 [INFO][5890] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" HandleID="k8s-pod-network.ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--q6dt5-eth0" Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.613 [INFO][5890] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.616850 containerd[1736]: 2026-01-23 23:59:33.615 [INFO][5883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b" Jan 23 23:59:33.619214 containerd[1736]: time="2026-01-23T23:59:33.616821341Z" level=info msg="TearDown network for sandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\" successfully" Jan 23 23:59:33.631973 containerd[1736]: time="2026-01-23T23:59:33.631848863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:33.631973 containerd[1736]: time="2026-01-23T23:59:33.631913623Z" level=info msg="RemovePodSandbox \"ecf669ee7729ad9955147e0f23f9dba630df47ebf06c8f107a3732476603338b\" returns successfully" Jan 23 23:59:33.636092 containerd[1736]: time="2026-01-23T23:59:33.632343383Z" level=info msg="StopPodSandbox for \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\"" Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.683 [WARNING][5904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89876e47-5c25-4ed8-975b-aadadd46d2c9", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81", Pod:"csi-node-driver-phrmd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic24195076a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.683 [INFO][5904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.683 [INFO][5904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" iface="eth0" netns="" Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.683 [INFO][5904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.683 [INFO][5904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.704 [INFO][5911] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.705 [INFO][5911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.705 [INFO][5911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.716 [WARNING][5911] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.716 [INFO][5911] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.720 [INFO][5911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.727823 containerd[1736]: 2026-01-23 23:59:33.722 [INFO][5904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:33.728240 containerd[1736]: time="2026-01-23T23:59:33.727874397Z" level=info msg="TearDown network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\" successfully" Jan 23 23:59:33.728240 containerd[1736]: time="2026-01-23T23:59:33.727898957Z" level=info msg="StopPodSandbox for \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\" returns successfully" Jan 23 23:59:33.728857 containerd[1736]: time="2026-01-23T23:59:33.728829837Z" level=info msg="RemovePodSandbox for \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\"" Jan 23 23:59:33.728992 containerd[1736]: time="2026-01-23T23:59:33.728869357Z" level=info msg="Forcibly stopping sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\"" Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.772 [WARNING][5926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89876e47-5c25-4ed8-975b-aadadd46d2c9", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"b3a0c962600c981cad85c19639703ca55a689c4f91e5d691bfc606d960bdbe81", Pod:"csi-node-driver-phrmd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic24195076a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.772 [INFO][5926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.773 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" iface="eth0" netns="" Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.773 [INFO][5926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.773 [INFO][5926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.797 [INFO][5933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.797 [INFO][5933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.797 [INFO][5933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.807 [WARNING][5933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.807 [INFO][5933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" HandleID="k8s-pod-network.df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Workload="ci--4081.3.6--n--95a9bf6543-k8s-csi--node--driver--phrmd-eth0" Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.808 [INFO][5933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.812203 containerd[1736]: 2026-01-23 23:59:33.810 [INFO][5926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a" Jan 23 23:59:33.812680 containerd[1736]: time="2026-01-23T23:59:33.812357450Z" level=info msg="TearDown network for sandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\" successfully" Jan 23 23:59:33.818989 containerd[1736]: time="2026-01-23T23:59:33.818934291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:33.819967 containerd[1736]: time="2026-01-23T23:59:33.818992451Z" level=info msg="RemovePodSandbox \"df5ab2474e5311d6b76f8519d634ad2b9fb885b46ccd32c656e371bb45bd8f5a\" returns successfully" Jan 23 23:59:33.820442 containerd[1736]: time="2026-01-23T23:59:33.820185091Z" level=info msg="StopPodSandbox for \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\"" Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.873 [WARNING][5947] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0", GenerateName:"calico-apiserver-5f88658b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"849bc66d-ccf9-400e-bccb-fea5f90abeb0", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88658b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450", Pod:"calico-apiserver-5f88658b6c-p27j5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7e937803c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.873 [INFO][5947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.873 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" iface="eth0" netns="" Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.873 [INFO][5947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.873 [INFO][5947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.899 [INFO][5954] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.900 [INFO][5954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.900 [INFO][5954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.908 [WARNING][5954] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.908 [INFO][5954] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.909 [INFO][5954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.915034 containerd[1736]: 2026-01-23 23:59:33.911 [INFO][5947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:33.915034 containerd[1736]: time="2026-01-23T23:59:33.913895305Z" level=info msg="TearDown network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\" successfully" Jan 23 23:59:33.915034 containerd[1736]: time="2026-01-23T23:59:33.913919905Z" level=info msg="StopPodSandbox for \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\" returns successfully" Jan 23 23:59:33.917633 containerd[1736]: time="2026-01-23T23:59:33.917598665Z" level=info msg="RemovePodSandbox for \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\"" Jan 23 23:59:33.917633 containerd[1736]: time="2026-01-23T23:59:33.917633105Z" level=info msg="Forcibly stopping sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\"" Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.951 [WARNING][5968] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0", GenerateName:"calico-apiserver-5f88658b6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"849bc66d-ccf9-400e-bccb-fea5f90abeb0", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f88658b6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-95a9bf6543", ContainerID:"4b96c11535361a87a67ef5c072ed0e31af9bb1c5d6f1ed3614b84daddb2fe450", Pod:"calico-apiserver-5f88658b6c-p27j5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7e937803c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.951 [INFO][5968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.951 [INFO][5968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" iface="eth0" netns="" Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.951 [INFO][5968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.951 [INFO][5968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.972 [INFO][5975] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.972 [INFO][5975] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.972 [INFO][5975] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.981 [WARNING][5975] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.981 [INFO][5975] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" HandleID="k8s-pod-network.f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Workload="ci--4081.3.6--n--95a9bf6543-k8s-calico--apiserver--5f88658b6c--p27j5-eth0" Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.982 [INFO][5975] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:59:33.988517 containerd[1736]: 2026-01-23 23:59:33.986 [INFO][5968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4" Jan 23 23:59:33.988917 containerd[1736]: time="2026-01-23T23:59:33.988555756Z" level=info msg="TearDown network for sandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\" successfully" Jan 23 23:59:33.996778 containerd[1736]: time="2026-01-23T23:59:33.996717597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:59:33.996845 containerd[1736]: time="2026-01-23T23:59:33.996798317Z" level=info msg="RemovePodSandbox \"f3a08a7e3d4145589e8082d53132a37a3cb9876c7d50fbc63c381f4913427ef4\" returns successfully" Jan 23 23:59:34.270959 containerd[1736]: time="2026-01-23T23:59:34.270737758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:59:34.555499 containerd[1736]: time="2026-01-23T23:59:34.555382480Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:34.559087 containerd[1736]: time="2026-01-23T23:59:34.559037961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:59:34.559171 containerd[1736]: time="2026-01-23T23:59:34.559142121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:59:34.560147 kubelet[3203]: E0123 23:59:34.560103 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:34.560425 kubelet[3203]: E0123 23:59:34.560151 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:34.560425 kubelet[3203]: E0123 23:59:34.560288 3203 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtpcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674d7cd84f-5hq44_calico-apiserver(e0b5e5a7-1acb-4d63-8673-57e3c939b318): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:34.561702 kubelet[3203]: E0123 23:59:34.561657 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 23 23:59:35.269327 containerd[1736]: time="2026-01-23T23:59:35.269234506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:59:35.539811 containerd[1736]: time="2026-01-23T23:59:35.539696626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:35.545724 containerd[1736]: time="2026-01-23T23:59:35.545678347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:59:35.545807 containerd[1736]: time="2026-01-23T23:59:35.545771867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:59:35.546154 kubelet[3203]: E0123 23:59:35.545896 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:35.546154 kubelet[3203]: E0123 23:59:35.545959 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:35.546154 kubelet[3203]: E0123 23:59:35.546087 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrx8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88658b6c-p27j5_calico-apiserver(849bc66d-ccf9-400e-bccb-fea5f90abeb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:35.547293 kubelet[3203]: E0123 23:59:35.547229 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 23 23:59:38.274828 containerd[1736]: time="2026-01-23T23:59:38.274764752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:59:38.512210 containerd[1736]: time="2026-01-23T23:59:38.512091988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:38.515917 containerd[1736]: time="2026-01-23T23:59:38.515772989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:59:38.515917 containerd[1736]: time="2026-01-23T23:59:38.515854589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:59:38.516328 kubelet[3203]: E0123 23:59:38.516104 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:59:38.516328 kubelet[3203]: E0123 23:59:38.516163 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:59:38.516328 kubelet[3203]: E0123 23:59:38.516284 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nbqht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-27fdn_calico-system(693475f7-1f52-409e-89ad-83367b27d7ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:38.517925 kubelet[3203]: E0123 23:59:38.517825 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef" Jan 23 23:59:41.269968 kubelet[3203]: E0123 
23:59:41.269491 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec" Jan 23 23:59:41.270979 kubelet[3203]: E0123 23:59:41.270887 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 23 23:59:44.275296 kubelet[3203]: E0123 23:59:44.275191 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:59:45.268711 kubelet[3203]: E0123 23:59:45.268559 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2" Jan 23 23:59:46.601086 systemd[1]: Started sshd@7-10.200.20.27:22-10.200.16.10:37428.service - OpenSSH per-connection server daemon (10.200.16.10:37428). 
Jan 23 23:59:47.096569 sshd[5996]: Accepted publickey for core from 10.200.16.10 port 37428 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:47.099544 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:47.106019 systemd-logind[1715]: New session 10 of user core. Jan 23 23:59:47.111338 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:59:47.574276 sshd[5996]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:47.579749 systemd[1]: sshd@7-10.200.20.27:22-10.200.16.10:37428.service: Deactivated successfully. Jan 23 23:59:47.583532 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:59:47.586660 systemd-logind[1715]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:59:47.588409 systemd-logind[1715]: Removed session 10. Jan 23 23:59:48.271755 kubelet[3203]: E0123 23:59:48.270821 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 23 23:59:50.269067 kubelet[3203]: E0123 23:59:50.268788 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 23 23:59:51.269008 kubelet[3203]: E0123 23:59:51.268663 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef" Jan 23 23:59:52.271179 containerd[1736]: time="2026-01-23T23:59:52.271116670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:59:52.508497 containerd[1736]: time="2026-01-23T23:59:52.507838811Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:52.512280 containerd[1736]: time="2026-01-23T23:59:52.512085132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:59:52.512599 containerd[1736]: time="2026-01-23T23:59:52.512228332Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:59:52.513641 kubelet[3203]: E0123 23:59:52.513211 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:59:52.514676 kubelet[3203]: E0123 23:59:52.513648 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:59:52.514676 kubelet[3203]: E0123 23:59:52.513927 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7e3e9b460e424236a2b5a2375c5d7b77,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787b66fb85-crtpt_calico-system(8ee41f25-89f1-4519-b99e-33fdb651ce3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:52.518953 containerd[1736]: time="2026-01-23T23:59:52.517217973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:59:52.668312 systemd[1]: Started sshd@8-10.200.20.27:22-10.200.16.10:36434.service - OpenSSH per-connection server daemon (10.200.16.10:36434). 
Jan 23 23:59:52.797104 containerd[1736]: time="2026-01-23T23:59:52.797051765Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:52.799878 containerd[1736]: time="2026-01-23T23:59:52.799829046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:59:52.800016 containerd[1736]: time="2026-01-23T23:59:52.799961086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:59:52.802311 kubelet[3203]: E0123 23:59:52.802083 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:59:52.802311 kubelet[3203]: E0123 23:59:52.802137 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:59:52.802311 kubelet[3203]: E0123 23:59:52.802249 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{}
,RestartPolicy:nil,} start failed in pod whisker-787b66fb85-crtpt_calico-system(8ee41f25-89f1-4519-b99e-33fdb651ce3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:52.804761 kubelet[3203]: E0123 23:59:52.804726 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 23 23:59:53.156234 sshd[6013]: Accepted publickey for core from 10.200.16.10 port 36434 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:53.158055 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:53.161627 systemd-logind[1715]: New session 11 of user core. Jan 23 23:59:53.168107 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:59:53.600673 sshd[6013]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:53.604438 systemd-logind[1715]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:59:53.605094 systemd[1]: sshd@8-10.200.20.27:22-10.200.16.10:36434.service: Deactivated successfully. Jan 23 23:59:53.607402 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:59:53.608636 systemd-logind[1715]: Removed session 11. 
Jan 23 23:59:56.269180 containerd[1736]: time="2026-01-23T23:59:56.269063190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:59:56.528298 containerd[1736]: time="2026-01-23T23:59:56.528036560Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:56.530548 containerd[1736]: time="2026-01-23T23:59:56.530459481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:59:56.530548 containerd[1736]: time="2026-01-23T23:59:56.530534161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:59:56.530685 kubelet[3203]: E0123 23:59:56.530648 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:56.530973 kubelet[3203]: E0123 23:59:56.530693 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:56.530973 kubelet[3203]: E0123 23:59:56.530809 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js7r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88658b6c-q6dt5_calico-apiserver(251b4c3c-e8df-4086-8bfb-8297ee672eec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:56.532801 kubelet[3203]: E0123 23:59:56.532254 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec" Jan 23 23:59:57.272473 containerd[1736]: time="2026-01-23T23:59:57.272266346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:59:57.568053 containerd[1736]: time="2026-01-23T23:59:57.567801763Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:57.571209 containerd[1736]: time="2026-01-23T23:59:57.571111644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:59:57.571209 containerd[1736]: time="2026-01-23T23:59:57.571184804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:59:57.571354 kubelet[3203]: E0123 23:59:57.571301 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:59:57.571671 kubelet[3203]: E0123 23:59:57.571365 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:59:57.571725 containerd[1736]: time="2026-01-23T23:59:57.571646964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:59:57.572727 kubelet[3203]: E0123 23:59:57.572640 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmjjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6977ffbc55-s4jdp_calico-system(a31be8f9-573e-4955-99b0-981cca2e99b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:57.574228 kubelet[3203]: E0123 23:59:57.574191 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2" Jan 23 23:59:57.846238 containerd[1736]: time="2026-01-23T23:59:57.845765378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:57.847999 containerd[1736]: time="2026-01-23T23:59:57.847928498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:59:57.847999 containerd[1736]: time="2026-01-23T23:59:57.847972858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:59:57.848153 kubelet[3203]: E0123 23:59:57.848115 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:59:57.848206 kubelet[3203]: E0123 23:59:57.848163 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:59:57.848305 kubelet[3203]: E0123 23:59:57.848264 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:57.851111 containerd[1736]: time="2026-01-23T23:59:57.850543778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:59:58.124432 containerd[1736]: time="2026-01-23T23:59:58.124310992Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:58.127235 containerd[1736]: time="2026-01-23T23:59:58.127140552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:59:58.127235 containerd[1736]: time="2026-01-23T23:59:58.127217592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:59:58.127383 kubelet[3203]: E0123 23:59:58.127344 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:59:58.127432 kubelet[3203]: E0123 23:59:58.127387 3203 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:59:58.127542 kubelet[3203]: E0123 23:59:58.127491 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:58.129563 kubelet[3203]: E0123 23:59:58.129523 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 23 23:59:58.689161 systemd[1]: Started sshd@9-10.200.20.27:22-10.200.16.10:36448.service - OpenSSH per-connection server daemon (10.200.16.10:36448). Jan 23 23:59:59.139176 sshd[6055]: Accepted publickey for core from 10.200.16.10 port 36448 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:59.140639 sshd[6055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:59.145431 systemd-logind[1715]: New session 12 of user core. Jan 23 23:59:59.151088 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:59:59.526166 sshd[6055]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:59.529620 systemd[1]: sshd@9-10.200.20.27:22-10.200.16.10:36448.service: Deactivated successfully. Jan 23 23:59:59.532762 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:59:59.533805 systemd-logind[1715]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:59:59.534873 systemd-logind[1715]: Removed session 12. Jan 23 23:59:59.623422 systemd[1]: Started sshd@10-10.200.20.27:22-10.200.16.10:42448.service - OpenSSH per-connection server daemon (10.200.16.10:42448). Jan 24 00:00:00.076214 sshd[6069]: Accepted publickey for core from 10.200.16.10 port 42448 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:00:00.078010 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:00:00.085277 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jan 24 00:00:00.093498 systemd-logind[1715]: New session 13 of user core. Jan 24 00:00:00.099102 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:00:00.107408 systemd[1]: logrotate.service: Deactivated successfully. Jan 24 00:00:00.269216 containerd[1736]: time="2026-01-24T00:00:00.269105730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:00:00.544072 sshd[6069]: pam_unix(sshd:session): session closed for user core Jan 24 00:00:00.550637 systemd[1]: sshd@10-10.200.20.27:22-10.200.16.10:42448.service: Deactivated successfully. Jan 24 00:00:00.553892 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:00:00.556829 systemd-logind[1715]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:00:00.558024 systemd-logind[1715]: Removed session 13. Jan 24 00:00:00.634318 systemd[1]: Started sshd@11-10.200.20.27:22-10.200.16.10:42450.service - OpenSSH per-connection server daemon (10.200.16.10:42450). 
Jan 24 00:00:00.939004 containerd[1736]: time="2026-01-24T00:00:00.938064981Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:00.945708 containerd[1736]: time="2026-01-24T00:00:00.945480702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:00:00.945708 containerd[1736]: time="2026-01-24T00:00:00.945540062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:00:00.945849 kubelet[3203]: E0124 00:00:00.945732 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:00.945849 kubelet[3203]: E0124 00:00:00.945783 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:00.947025 kubelet[3203]: E0124 00:00:00.945904 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtpcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674d7cd84f-5hq44_calico-apiserver(e0b5e5a7-1acb-4d63-8673-57e3c939b318): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:00.947255 kubelet[3203]: E0124 00:00:00.947090 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 24 00:00:01.126536 sshd[6082]: Accepted publickey for core from 10.200.16.10 port 42450 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:00:01.127879 sshd[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:00:01.132011 systemd-logind[1715]: New session 14 of user core. Jan 24 00:00:01.137086 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:00:01.269688 containerd[1736]: time="2026-01-24T00:00:01.269480005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:00:01.587177 sshd[6082]: pam_unix(sshd:session): session closed for user core Jan 24 00:00:01.592326 systemd-logind[1715]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:00:01.594076 systemd[1]: sshd@11-10.200.20.27:22-10.200.16.10:42450.service: Deactivated successfully. Jan 24 00:00:01.596145 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:00:01.597266 systemd-logind[1715]: Removed session 14. 
Jan 24 00:00:02.474059 containerd[1736]: time="2026-01-24T00:00:02.474012800Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:02.476475 containerd[1736]: time="2026-01-24T00:00:02.476439321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:00:02.476572 containerd[1736]: time="2026-01-24T00:00:02.476533561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:00:02.477372 kubelet[3203]: E0124 00:00:02.476697 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:02.477372 kubelet[3203]: E0124 00:00:02.476740 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:02.477372 kubelet[3203]: E0124 00:00:02.476934 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrx8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88658b6c-p27j5_calico-apiserver(849bc66d-ccf9-400e-bccb-fea5f90abeb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:02.478164 kubelet[3203]: E0124 00:00:02.478102 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 24 00:00:02.478347 containerd[1736]: time="2026-01-24T00:00:02.478196641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:00:03.269752 kubelet[3203]: E0124 00:00:03.269703 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 24 00:00:04.711418 containerd[1736]: time="2026-01-24T00:00:04.711376246Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:04.715537 containerd[1736]: time="2026-01-24T00:00:04.715481047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 
24 00:00:04.715681 containerd[1736]: time="2026-01-24T00:00:04.715590607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:00:04.716327 kubelet[3203]: E0124 00:00:04.715818 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:00:04.716327 kubelet[3203]: E0124 00:00:04.715871 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:00:04.716327 kubelet[3203]: E0124 00:00:04.716003 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nbqht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-27fdn_calico-system(693475f7-1f52-409e-89ad-83367b27d7ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:04.717442 kubelet[3203]: E0124 00:00:04.717385 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef" Jan 24 00:00:06.667240 systemd[1]: Started sshd@12-10.200.20.27:22-10.200.16.10:42462.service - OpenSSH per-connection server daemon (10.200.16.10:42462). Jan 24 00:00:07.136722 sshd[6102]: Accepted publickey for core from 10.200.16.10 port 42462 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:00:07.161642 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:00:07.168073 systemd-logind[1715]: New session 15 of user core. Jan 24 00:00:07.173105 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:00:07.546866 sshd[6102]: pam_unix(sshd:session): session closed for user core Jan 24 00:00:07.550912 systemd-logind[1715]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:00:07.551279 systemd[1]: sshd@12-10.200.20.27:22-10.200.16.10:42462.service: Deactivated successfully. Jan 24 00:00:07.553136 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:00:07.554778 systemd-logind[1715]: Removed session 15. 
Jan 24 00:00:10.270121 kubelet[3203]: E0124 00:00:10.269713 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec"
Jan 24 00:00:11.269317 kubelet[3203]: E0124 00:00:11.269275 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2"
Jan 24 00:00:12.271289 kubelet[3203]: E0124 00:00:12.271227 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9"
Jan 24 00:00:12.644160 systemd[1]: Started sshd@13-10.200.20.27:22-10.200.16.10:52512.service - OpenSSH per-connection server daemon (10.200.16.10:52512).
Jan 24 00:00:13.133620 sshd[6118]: Accepted publickey for core from 10.200.16.10 port 52512 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:13.158261 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:13.164038 systemd-logind[1715]: New session 16 of user core.
Jan 24 00:00:13.171082 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 24 00:00:13.593165 sshd[6118]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:13.597200 systemd[1]: sshd@13-10.200.20.27:22-10.200.16.10:52512.service: Deactivated successfully.
Jan 24 00:00:13.599340 systemd[1]: session-16.scope: Deactivated successfully.
Jan 24 00:00:13.600013 systemd-logind[1715]: Session 16 logged out. Waiting for processes to exit.
Jan 24 00:00:13.600762 systemd-logind[1715]: Removed session 16.
Jan 24 00:00:14.272429 kubelet[3203]: E0124 00:00:14.272383 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d"
Jan 24 00:00:16.269973 kubelet[3203]: E0124 00:00:16.268876 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0"
Jan 24 00:00:16.270560 kubelet[3203]: E0124 00:00:16.268975 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318"
Jan 24 00:00:17.268700 kubelet[3203]: E0124 00:00:17.268657 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef"
Jan 24 00:00:18.680143 systemd[1]: Started sshd@14-10.200.20.27:22-10.200.16.10:52526.service - OpenSSH per-connection server daemon (10.200.16.10:52526).
Jan 24 00:00:19.168440 sshd[6131]: Accepted publickey for core from 10.200.16.10 port 52526 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:19.172557 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:19.176238 systemd-logind[1715]: New session 17 of user core.
Jan 24 00:00:19.185123 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 00:00:19.591225 sshd[6131]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:19.595672 systemd[1]: sshd@14-10.200.20.27:22-10.200.16.10:52526.service: Deactivated successfully.
Jan 24 00:00:19.598516 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:00:19.600440 systemd-logind[1715]: Session 17 logged out. Waiting for processes to exit.
Jan 24 00:00:19.601676 systemd-logind[1715]: Removed session 17.
Jan 24 00:00:22.270472 kubelet[3203]: E0124 00:00:22.270203 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec"
Jan 24 00:00:24.270734 kubelet[3203]: E0124 00:00:24.270651 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2"
Jan 24 00:00:24.272386 kubelet[3203]: E0124 00:00:24.272338 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9"
Jan 24 00:00:24.686258 systemd[1]: Started sshd@15-10.200.20.27:22-10.200.16.10:57286.service - OpenSSH per-connection server daemon (10.200.16.10:57286).
Jan 24 00:00:25.184719 sshd[6144]: Accepted publickey for core from 10.200.16.10 port 57286 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:25.186678 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:25.191968 systemd-logind[1715]: New session 18 of user core.
Jan 24 00:00:25.196093 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:00:25.616229 sshd[6144]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:25.621519 systemd[1]: sshd@15-10.200.20.27:22-10.200.16.10:57286.service: Deactivated successfully.
Jan 24 00:00:25.626493 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:00:25.630570 systemd-logind[1715]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:00:25.633389 systemd-logind[1715]: Removed session 18.
Jan 24 00:00:26.269236 kubelet[3203]: E0124 00:00:26.269155 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d"
Jan 24 00:00:27.455626 waagent[1916]: 2026-01-24T00:00:27.455539Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Jan 24 00:00:27.465637 waagent[1916]: 2026-01-24T00:00:27.464789Z INFO ExtHandler
Jan 24 00:00:27.465637 waagent[1916]: 2026-01-24T00:00:27.464891Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Jan 24 00:00:27.527393 waagent[1916]: 2026-01-24T00:00:27.527345Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 24 00:00:27.601661 waagent[1916]: 2026-01-24T00:00:27.600726Z INFO ExtHandler Downloaded certificate {'thumbprint': '666102BC455F6C13462E801798B5420D6114C177', 'hasPrivateKey': True}
Jan 24 00:00:27.603342 waagent[1916]: 2026-01-24T00:00:27.603286Z INFO ExtHandler Fetch goal state completed
Jan 24 00:00:27.604671 waagent[1916]: 2026-01-24T00:00:27.603808Z INFO ExtHandler ExtHandler
Jan 24 00:00:27.604891 waagent[1916]: 2026-01-24T00:00:27.604845Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 8ca5b5f5-7487-4510-a26b-67a65cfca6bb correlation d7740a77-13e7-4b28-acb5-342e59f0a4bb created: 2026-01-24T00:00:22.463654Z]
Jan 24 00:00:27.606131 waagent[1916]: 2026-01-24T00:00:27.605341Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 24 00:00:27.606766 waagent[1916]: 2026-01-24T00:00:27.606726Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 2 ms]
Jan 24 00:00:29.268246 kubelet[3203]: E0124 00:00:29.268198 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318"
Jan 24 00:00:29.269078 kubelet[3203]: E0124 00:00:29.268271 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0"
Jan 24 00:00:30.701255 systemd[1]: Started sshd@16-10.200.20.27:22-10.200.16.10:42282.service - OpenSSH per-connection server daemon (10.200.16.10:42282).
Jan 24 00:00:31.152773 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 42282 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:31.154157 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:31.158062 systemd-logind[1715]: New session 19 of user core.
Jan 24 00:00:31.165094 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:00:31.270731 kubelet[3203]: E0124 00:00:31.270355 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef"
Jan 24 00:00:31.549813 sshd[6184]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:31.553191 systemd[1]: sshd@16-10.200.20.27:22-10.200.16.10:42282.service: Deactivated successfully.
Jan 24 00:00:31.557670 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:00:31.558857 systemd-logind[1715]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:00:31.560126 systemd-logind[1715]: Removed session 19.
Jan 24 00:00:33.634593 waagent[1916]: 2026-01-24T00:00:33.634482Z INFO ExtHandler
Jan 24 00:00:33.634899 waagent[1916]: 2026-01-24T00:00:33.634828Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3b0c7787-e985-4e51-b29b-f1b354c345fe eTag: 4762716081301844164 source: Fabric]
Jan 24 00:00:33.635350 waagent[1916]: 2026-01-24T00:00:33.635291Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 24 00:00:34.269900 kubelet[3203]: E0124 00:00:34.269862 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec"
Jan 24 00:00:35.268298 kubelet[3203]: E0124 00:00:35.267912 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2"
Jan 24 00:00:36.649587 systemd[1]: Started sshd@17-10.200.20.27:22-10.200.16.10:42284.service - OpenSSH per-connection server daemon (10.200.16.10:42284).
Jan 24 00:00:37.139429 sshd[6205]: Accepted publickey for core from 10.200.16.10 port 42284 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:37.140866 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:37.146235 systemd-logind[1715]: New session 20 of user core.
Jan 24 00:00:37.151110 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 24 00:00:37.579176 sshd[6205]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:37.584615 systemd[1]: sshd@17-10.200.20.27:22-10.200.16.10:42284.service: Deactivated successfully.
Jan 24 00:00:37.587688 systemd[1]: session-20.scope: Deactivated successfully.
Jan 24 00:00:37.590338 systemd-logind[1715]: Session 20 logged out. Waiting for processes to exit.
Jan 24 00:00:37.592583 systemd-logind[1715]: Removed session 20.
Jan 24 00:00:37.661216 systemd[1]: Started sshd@18-10.200.20.27:22-10.200.16.10:42296.service - OpenSSH per-connection server daemon (10.200.16.10:42296).
Jan 24 00:00:38.126537 sshd[6218]: Accepted publickey for core from 10.200.16.10 port 42296 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:38.128081 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:38.131562 systemd-logind[1715]: New session 21 of user core.
Jan 24 00:00:38.140091 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 24 00:00:38.272896 containerd[1736]: time="2026-01-24T00:00:38.272850585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:00:38.898960 sshd[6218]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:38.904681 systemd[1]: sshd@18-10.200.20.27:22-10.200.16.10:42296.service: Deactivated successfully.
Jan 24 00:00:38.909308 systemd[1]: session-21.scope: Deactivated successfully.
Jan 24 00:00:38.911326 systemd-logind[1715]: Session 21 logged out. Waiting for processes to exit.
Jan 24 00:00:38.912568 systemd-logind[1715]: Removed session 21.
Jan 24 00:00:38.985031 systemd[1]: Started sshd@19-10.200.20.27:22-10.200.16.10:42300.service - OpenSSH per-connection server daemon (10.200.16.10:42300).
Jan 24 00:00:39.449534 sshd[6229]: Accepted publickey for core from 10.200.16.10 port 42300 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:39.451419 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:39.455571 systemd-logind[1715]: New session 22 of user core.
Jan 24 00:00:39.464117 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 24 00:00:40.053722 containerd[1736]: time="2026-01-24T00:00:40.052990892Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:40.078964 containerd[1736]: time="2026-01-24T00:00:40.077910056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:00:40.078964 containerd[1736]: time="2026-01-24T00:00:40.077969537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 00:00:40.079134 kubelet[3203]: E0124 00:00:40.078128 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:00:40.079134 kubelet[3203]: E0124 00:00:40.078174 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:00:40.079134 kubelet[3203]: E0124 00:00:40.078276 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME)
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:40.081331 containerd[1736]: time="2026-01-24T00:00:40.081130617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:00:40.553076 sshd[6229]: pam_unix(sshd:session): session closed for user core Jan 24 00:00:40.559586 systemd[1]: sshd@19-10.200.20.27:22-10.200.16.10:42300.service: Deactivated successfully. Jan 24 00:00:40.564879 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:00:40.566199 systemd-logind[1715]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:00:40.568367 systemd-logind[1715]: Removed session 22. Jan 24 00:00:40.659104 systemd[1]: Started sshd@20-10.200.20.27:22-10.200.16.10:60762.service - OpenSSH per-connection server daemon (10.200.16.10:60762). Jan 24 00:00:41.161814 sshd[6271]: Accepted publickey for core from 10.200.16.10 port 60762 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:00:41.163761 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:00:41.170930 systemd-logind[1715]: New session 23 of user core. Jan 24 00:00:41.174256 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:00:41.769296 sshd[6271]: pam_unix(sshd:session): session closed for user core Jan 24 00:00:41.774194 systemd-logind[1715]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:00:41.775011 systemd[1]: sshd@20-10.200.20.27:22-10.200.16.10:60762.service: Deactivated successfully. 
Jan 24 00:00:41.778714 systemd[1]: session-23.scope: Deactivated successfully.
Jan 24 00:00:41.780998 systemd-logind[1715]: Removed session 23.
Jan 24 00:00:41.853341 systemd[1]: Started sshd@21-10.200.20.27:22-10.200.16.10:60772.service - OpenSSH per-connection server daemon (10.200.16.10:60772).
Jan 24 00:00:42.308339 sshd[6282]: Accepted publickey for core from 10.200.16.10 port 60772 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:42.309328 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:42.313860 systemd-logind[1715]: New session 24 of user core.
Jan 24 00:00:42.322251 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:00:42.707089 sshd[6282]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:42.710486 systemd-logind[1715]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:00:42.711013 systemd[1]: sshd@21-10.200.20.27:22-10.200.16.10:60772.service: Deactivated successfully.
Jan 24 00:00:42.713932 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:00:42.715724 systemd-logind[1715]: Removed session 24.
Jan 24 00:00:42.843631 containerd[1736]: time="2026-01-24T00:00:42.843585195Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:00:42.846023 containerd[1736]: time="2026-01-24T00:00:42.845983235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:00:42.846125 containerd[1736]: time="2026-01-24T00:00:42.846074115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:00:42.846967 kubelet[3203]: E0124 00:00:42.846259 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:00:42.846967 kubelet[3203]: E0124 00:00:42.846314 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:00:42.846967 kubelet[3203]: E0124 00:00:42.846499 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS)
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmgrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-phrmd_calico-system(89876e47-5c25-4ed8-975b-aadadd46d2c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:42.847499 containerd[1736]: time="2026-01-24T00:00:42.846907875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:00:42.848347 kubelet[3203]: E0124 00:00:42.848296 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 24 00:00:43.134301 containerd[1736]: time="2026-01-24T00:00:43.134086211Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:43.138168 containerd[1736]: time="2026-01-24T00:00:43.138072972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:00:43.138168 containerd[1736]: time="2026-01-24T00:00:43.138132452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:00:43.138296 kubelet[3203]: E0124 00:00:43.138264 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:00:43.138343 kubelet[3203]: E0124 00:00:43.138309 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:00:43.138955 kubelet[3203]: E0124 00:00:43.138503 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7e3e9b460e424236a2b5a2375c5d7b77,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787b66fb85-crtpt_calico-system(8ee41f25-89f1-4519-b99e-33fdb651ce3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:43.139071 containerd[1736]: time="2026-01-24T00:00:43.138732132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:00:43.765765 containerd[1736]: time="2026-01-24T00:00:43.765593294Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:43.768384 containerd[1736]: time="2026-01-24T00:00:43.768284975Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:00:43.768384 containerd[1736]: time="2026-01-24T00:00:43.768358375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:00:43.769276 kubelet[3203]: E0124 00:00:43.768642 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:43.769276 kubelet[3203]: E0124 00:00:43.768690 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:43.769276 kubelet[3203]: E0124 00:00:43.768894 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtpcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674d7cd84f-5hq44_calico-apiserver(e0b5e5a7-1acb-4d63-8673-57e3c939b318): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:43.770934 kubelet[3203]: E0124 00:00:43.770905 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 24 00:00:43.776404 containerd[1736]: time="2026-01-24T00:00:43.776149576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:00:44.045850 containerd[1736]: time="2026-01-24T00:00:44.045724669Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:44.049334 containerd[1736]: time="2026-01-24T00:00:44.049201029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:00:44.049334 containerd[1736]: time="2026-01-24T00:00:44.049285029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:00:44.049478 kubelet[3203]: E0124 00:00:44.049434 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:00:44.049716 kubelet[3203]: E0124 00:00:44.049482 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:00:44.049716 kubelet[3203]: E0124 00:00:44.049586 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsm26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-787b66fb85-crtpt_calico-system(8ee41f25-89f1-4519-b99e-33fdb651ce3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:44.050848 kubelet[3203]: E0124 00:00:44.050808 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 24 00:00:44.271567 containerd[1736]: time="2026-01-24T00:00:44.271095353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:00:44.271682 kubelet[3203]: E0124 00:00:44.271079 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef" Jan 24 00:00:44.555301 containerd[1736]: time="2026-01-24T00:00:44.555250368Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:44.559675 containerd[1736]: time="2026-01-24T00:00:44.559628329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:00:44.559749 containerd[1736]: time="2026-01-24T00:00:44.559723649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:00:44.560983 kubelet[3203]: E0124 00:00:44.559852 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:44.560983 kubelet[3203]: E0124 00:00:44.559899 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:44.560983 kubelet[3203]: E0124 00:00:44.560058 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrx8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88658b6c-p27j5_calico-apiserver(849bc66d-ccf9-400e-bccb-fea5f90abeb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:44.561530 kubelet[3203]: E0124 00:00:44.561491 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 24 00:00:47.790245 systemd[1]: Started sshd@22-10.200.20.27:22-10.200.16.10:60782.service - OpenSSH per-connection server daemon (10.200.16.10:60782). Jan 24 00:00:48.245146 sshd[6321]: Accepted publickey for core from 10.200.16.10 port 60782 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:00:48.247023 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:00:48.254711 systemd-logind[1715]: New session 25 of user core. Jan 24 00:00:48.261148 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 00:00:48.273369 containerd[1736]: time="2026-01-24T00:00:48.273142012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:00:48.654375 sshd[6321]: pam_unix(sshd:session): session closed for user core Jan 24 00:00:48.657761 systemd[1]: sshd@22-10.200.20.27:22-10.200.16.10:60782.service: Deactivated successfully. Jan 24 00:00:48.661446 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:00:48.663001 systemd-logind[1715]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:00:48.664048 systemd-logind[1715]: Removed session 25. 
Jan 24 00:00:49.153935 containerd[1736]: time="2026-01-24T00:00:49.153875343Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:49.156707 containerd[1736]: time="2026-01-24T00:00:49.156673064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:00:49.156870 containerd[1736]: time="2026-01-24T00:00:49.156757264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:00:49.156900 kubelet[3203]: E0124 00:00:49.156863 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:00:49.157243 kubelet[3203]: E0124 00:00:49.156906 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:00:49.157243 kubelet[3203]: E0124 00:00:49.157158 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmjjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6977ffbc55-s4jdp_calico-system(a31be8f9-573e-4955-99b0-981cca2e99b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:49.158069 containerd[1736]: time="2026-01-24T00:00:49.157801904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:00:49.158348 kubelet[3203]: E0124 00:00:49.158321 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2" Jan 24 00:00:49.424247 containerd[1736]: time="2026-01-24T00:00:49.424131716Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:49.430549 containerd[1736]: time="2026-01-24T00:00:49.430499597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:00:49.430693 containerd[1736]: time="2026-01-24T00:00:49.430603357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:00:49.431174 kubelet[3203]: E0124 00:00:49.430733 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:49.431174 kubelet[3203]: E0124 00:00:49.430778 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:00:49.431174 
kubelet[3203]: E0124 00:00:49.430892 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js7r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f88658b6c-q6dt5_calico-apiserver(251b4c3c-e8df-4086-8bfb-8297ee672eec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:49.432262 kubelet[3203]: E0124 00:00:49.432232 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec" Jan 24 00:00:53.270876 kubelet[3203]: E0124 00:00:53.270199 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 24 00:00:53.755205 systemd[1]: Started sshd@23-10.200.20.27:22-10.200.16.10:33750.service - OpenSSH per-connection server daemon (10.200.16.10:33750). Jan 24 00:00:54.239728 sshd[6334]: Accepted publickey for core from 10.200.16.10 port 33750 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:00:54.241428 sshd[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:00:54.248126 systemd-logind[1715]: New session 26 of user core. Jan 24 00:00:54.251082 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 00:00:54.272775 kubelet[3203]: E0124 00:00:54.271270 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 24 00:00:54.662260 sshd[6334]: pam_unix(sshd:session): session closed for user core Jan 24 00:00:54.666831 systemd[1]: sshd@23-10.200.20.27:22-10.200.16.10:33750.service: Deactivated successfully. Jan 24 00:00:54.669935 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 00:00:54.672638 systemd-logind[1715]: Session 26 logged out. Waiting for processes to exit. Jan 24 00:00:54.674017 systemd-logind[1715]: Removed session 26. 
Jan 24 00:00:57.268685 kubelet[3203]: E0124 00:00:57.268619 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 24 00:00:58.271623 containerd[1736]: time="2026-01-24T00:00:58.271574321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:00:59.756050 systemd[1]: Started sshd@24-10.200.20.27:22-10.200.16.10:46816.service - OpenSSH per-connection server daemon (10.200.16.10:46816). Jan 24 00:00:59.798120 containerd[1736]: time="2026-01-24T00:00:59.798074183Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:00:59.800777 containerd[1736]: time="2026-01-24T00:00:59.800737623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:00:59.800777 containerd[1736]: time="2026-01-24T00:00:59.800812543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:00:59.800978 kubelet[3203]: E0124 00:00:59.800921 3203 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:00:59.801187 kubelet[3203]: E0124 00:00:59.800985 3203 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:00:59.801187 kubelet[3203]: E0124 00:00:59.801104 3203 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nbqht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-27fdn_calico-system(693475f7-1f52-409e-89ad-83367b27d7ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:00:59.802486 kubelet[3203]: E0124 00:00:59.802410 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef" Jan 24 00:01:00.241708 sshd[6369]: Accepted publickey 
for core from 10.200.16.10 port 46816 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:00.243410 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:00.247694 systemd-logind[1715]: New session 27 of user core. Jan 24 00:01:00.254135 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 24 00:01:00.270479 kubelet[3203]: E0124 00:01:00.269895 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 24 00:01:00.661197 sshd[6369]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:00.666145 systemd-logind[1715]: Session 27 logged out. Waiting for processes to exit. Jan 24 00:01:00.666359 systemd[1]: sshd@24-10.200.20.27:22-10.200.16.10:46816.service: Deactivated successfully. Jan 24 00:01:00.668846 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 00:01:00.671321 systemd-logind[1715]: Removed session 27. Jan 24 00:01:03.268594 kubelet[3203]: E0124 00:01:03.268323 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-q6dt5" podUID="251b4c3c-e8df-4086-8bfb-8297ee672eec" Jan 24 00:01:03.271388 kubelet[3203]: E0124 00:01:03.271007 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2" Jan 24 00:01:05.744316 systemd[1]: Started sshd@25-10.200.20.27:22-10.200.16.10:46828.service - OpenSSH per-connection server daemon (10.200.16.10:46828). Jan 24 00:01:06.191390 sshd[6382]: Accepted publickey for core from 10.200.16.10 port 46828 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:06.192570 sshd[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:06.202132 systemd-logind[1715]: New session 28 of user core. Jan 24 00:01:06.207286 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 24 00:01:06.272409 kubelet[3203]: E0124 00:01:06.272366 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787b66fb85-crtpt" podUID="8ee41f25-89f1-4519-b99e-33fdb651ce3d" Jan 24 00:01:06.577041 sshd[6382]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:06.580391 systemd[1]: sshd@25-10.200.20.27:22-10.200.16.10:46828.service: Deactivated successfully. Jan 24 00:01:06.583371 systemd[1]: session-28.scope: Deactivated successfully. Jan 24 00:01:06.584252 systemd-logind[1715]: Session 28 logged out. Waiting for processes to exit. Jan 24 00:01:06.585109 systemd-logind[1715]: Removed session 28. Jan 24 00:01:08.272016 kubelet[3203]: E0124 00:01:08.271401 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674d7cd84f-5hq44" podUID="e0b5e5a7-1acb-4d63-8673-57e3c939b318" Jan 24 00:01:08.272555 kubelet[3203]: E0124 00:01:08.272524 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-phrmd" podUID="89876e47-5c25-4ed8-975b-aadadd46d2c9" Jan 24 00:01:11.672178 systemd[1]: Started sshd@26-10.200.20.27:22-10.200.16.10:39564.service - OpenSSH per-connection server daemon (10.200.16.10:39564). 
Jan 24 00:01:12.125787 sshd[6399]: Accepted publickey for core from 10.200.16.10 port 39564 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 24 00:01:12.127213 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:01:12.133020 systemd-logind[1715]: New session 29 of user core. Jan 24 00:01:12.138120 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 24 00:01:12.270335 kubelet[3203]: E0124 00:01:12.270292 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27fdn" podUID="693475f7-1f52-409e-89ad-83367b27d7ef" Jan 24 00:01:12.527186 sshd[6399]: pam_unix(sshd:session): session closed for user core Jan 24 00:01:12.533021 systemd-logind[1715]: Session 29 logged out. Waiting for processes to exit. Jan 24 00:01:12.533645 systemd[1]: sshd@26-10.200.20.27:22-10.200.16.10:39564.service: Deactivated successfully. Jan 24 00:01:12.537530 systemd[1]: session-29.scope: Deactivated successfully. Jan 24 00:01:12.539971 systemd-logind[1715]: Removed session 29. Jan 24 00:01:13.267606 kubelet[3203]: E0124 00:01:13.267548 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f88658b6c-p27j5" podUID="849bc66d-ccf9-400e-bccb-fea5f90abeb0" Jan 24 00:01:14.269784 kubelet[3203]: E0124 00:01:14.269730 3203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6977ffbc55-s4jdp" podUID="a31be8f9-573e-4955-99b0-981cca2e99b2"