Jan 23 23:56:05.178483 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 23:56:05.178504 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:56:05.178512 kernel: KASLR enabled
Jan 23 23:56:05.178518 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 23 23:56:05.178525 kernel: printk: bootconsole [pl11] enabled
Jan 23 23:56:05.178530 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:56:05.178538 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 23 23:56:05.178544 kernel: random: crng init done
Jan 23 23:56:05.178550 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:56:05.178556 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 23 23:56:05.178562 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178568 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178575 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 23:56:05.178582 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178589 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178595 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178609 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178617 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178623 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178630 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 23 23:56:05.178636 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:56:05.178642 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 23 23:56:05.178649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 23:56:05.178655 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 23 23:56:05.178662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 23 23:56:05.178668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 23 23:56:05.178674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 23 23:56:05.178681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 23 23:56:05.178688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 23 23:56:05.178695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 23 23:56:05.178701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 23 23:56:05.178708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 23 23:56:05.178714 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 23 23:56:05.178720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 23 23:56:05.178726 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 23 23:56:05.178733 kernel: Zone ranges:
Jan 23 23:56:05.178739 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 23 23:56:05.178745 kernel: DMA32 empty
Jan 23 23:56:05.178751 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:56:05.178758 kernel: Movable zone start for each node
Jan 23 23:56:05.178768 kernel: Early memory node ranges
Jan 23 23:56:05.178775 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 23 23:56:05.178782 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 23 23:56:05.178789 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 23 23:56:05.178796 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 23 23:56:05.178804 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 23 23:56:05.178810 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 23 23:56:05.178817 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:56:05.178824 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 23 23:56:05.178831 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 23 23:56:05.178838 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:56:05.178844 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 23:56:05.178851 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:56:05.178858 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 23 23:56:05.178864 kernel: psci: SMC Calling Convention v1.4
Jan 23 23:56:05.178871 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 23 23:56:05.178878 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 23 23:56:05.178886 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:56:05.178892 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:56:05.178899 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:56:05.178906 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:56:05.178913 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:56:05.178920 kernel: CPU features: detected: Hardware dirty bit management
Jan 23 23:56:05.178926 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:56:05.178933 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 23:56:05.178940 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 23:56:05.178947 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 23:56:05.178953 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 23 23:56:05.178961 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 23:56:05.178968 kernel: alternatives: applying boot alternatives
Jan 23 23:56:05.178976 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:56:05.178983 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:56:05.178990 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:56:05.178997 kernel: Fallback order for Node 0: 0
Jan 23 23:56:05.179004 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 23 23:56:05.179010 kernel: Policy zone: Normal
Jan 23 23:56:05.179017 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:56:05.179024 kernel: software IO TLB: area num 2.
Jan 23 23:56:05.179031 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 23 23:56:05.179040 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Jan 23 23:56:05.179047 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:56:05.179053 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:56:05.179060 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:56:05.179067 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:56:05.179075 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:56:05.179081 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:56:05.179088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:56:05.179095 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:56:05.179102 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:56:05.179108 kernel: GICv3: 960 SPIs implemented
Jan 23 23:56:05.179116 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:56:05.179123 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:56:05.179130 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 23 23:56:05.179136 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 23 23:56:05.179143 kernel: ITS: No ITS available, not enabling LPIs
Jan 23 23:56:05.179150 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:56:05.179157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:56:05.179163 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 23:56:05.179171 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 23:56:05.179177 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 23:56:05.179184 kernel: Console: colour dummy device 80x25
Jan 23 23:56:05.179193 kernel: printk: console [tty1] enabled
Jan 23 23:56:05.179200 kernel: ACPI: Core revision 20230628
Jan 23 23:56:05.179207 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 23:56:05.179214 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:56:05.179221 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:56:05.179228 kernel: landlock: Up and running.
Jan 23 23:56:05.179235 kernel: SELinux: Initializing.
Jan 23 23:56:05.179242 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:56:05.179249 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:56:05.179260 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:56:05.179267 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:56:05.179274 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 23 23:56:05.179281 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 23:56:05.179288 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 23:56:05.179295 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:56:05.179302 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:56:05.179309 kernel: Remapping and enabling EFI services.
Jan 23 23:56:05.179322 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:56:05.179329 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:56:05.179336 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 23 23:56:05.179344 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:56:05.179352 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 23:56:05.179360 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:56:05.179367 kernel: SMP: Total of 2 processors activated.
Jan 23 23:56:05.179374 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:56:05.179382 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 23 23:56:05.179391 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 23:56:05.179398 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:56:05.179405 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 23:56:05.179413 kernel: CPU features: detected: LSE atomic instructions
Jan 23 23:56:05.179420 kernel: CPU features: detected: Privileged Access Never
Jan 23 23:56:05.179427 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:56:05.179435 kernel: alternatives: applying system-wide alternatives
Jan 23 23:56:05.179442 kernel: devtmpfs: initialized
Jan 23 23:56:05.179449 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:56:05.179458 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:56:05.179465 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:56:05.179472 kernel: SMBIOS 3.1.0 present.
Jan 23 23:56:05.179480 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 23 23:56:05.179487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:56:05.179495 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:56:05.179502 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:56:05.179509 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:56:05.179517 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:56:05.179526 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 23 23:56:05.179533 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:56:05.179540 kernel: cpuidle: using governor menu
Jan 23 23:56:05.179548 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:56:05.179555 kernel: ASID allocator initialised with 32768 entries
Jan 23 23:56:05.179562 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:56:05.179570 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:56:05.179577 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 23:56:05.179585 kernel: Modules: 0 pages in range for non-PLT usage
Jan 23 23:56:05.179593 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:56:05.179604 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:56:05.179611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:56:05.179619 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:56:05.179626 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:56:05.179633 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:56:05.179641 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:56:05.179648 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:56:05.179655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:56:05.179664 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:56:05.179671 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:56:05.179679 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:56:05.179686 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:56:05.179693 kernel: ACPI: Interpreter enabled
Jan 23 23:56:05.179700 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:56:05.179708 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 23:56:05.179715 kernel: printk: console [ttyAMA0] enabled
Jan 23 23:56:05.179722 kernel: printk: bootconsole [pl11] disabled
Jan 23 23:56:05.179731 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 23 23:56:05.179738 kernel: iommu: Default domain type: Translated
Jan 23 23:56:05.179746 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:56:05.179753 kernel: efivars: Registered efivars operations
Jan 23 23:56:05.179760 kernel: vgaarb: loaded
Jan 23 23:56:05.179767 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:56:05.179775 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:56:05.179782 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:56:05.179789 kernel: pnp: PnP ACPI init
Jan 23 23:56:05.179798 kernel: pnp: PnP ACPI: found 0 devices
Jan 23 23:56:05.179805 kernel: NET: Registered PF_INET protocol family
Jan 23 23:56:05.179812 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:56:05.179820 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:56:05.179827 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:56:05.179835 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:56:05.179842 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:56:05.179850 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:56:05.179857 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:56:05.179866 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:56:05.179873 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:56:05.179880 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:56:05.179888 kernel: kvm [1]: HYP mode not available
Jan 23 23:56:05.179895 kernel: Initialise system trusted keyrings
Jan 23 23:56:05.179903 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:56:05.179910 kernel: Key type asymmetric registered
Jan 23 23:56:05.179917 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:56:05.179924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:56:05.179933 kernel: io scheduler mq-deadline registered
Jan 23 23:56:05.179940 kernel: io scheduler kyber registered
Jan 23 23:56:05.179947 kernel: io scheduler bfq registered
Jan 23 23:56:05.179955 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:56:05.179962 kernel: thunder_xcv, ver 1.0
Jan 23 23:56:05.179969 kernel: thunder_bgx, ver 1.0
Jan 23 23:56:05.179976 kernel: nicpf, ver 1.0
Jan 23 23:56:05.179983 kernel: nicvf, ver 1.0
Jan 23 23:56:05.180109 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:56:05.180187 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:56:04 UTC (1769212564)
Jan 23 23:56:05.180197 kernel: efifb: probing for efifb
Jan 23 23:56:05.180205 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 23:56:05.180212 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 23:56:05.180219 kernel: efifb: scrolling: redraw
Jan 23 23:56:05.180226 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 23:56:05.180234 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 23:56:05.180241 kernel: fb0: EFI VGA frame buffer device
Jan 23 23:56:05.180250 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 23 23:56:05.180258 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:56:05.180265 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 23 23:56:05.180273 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:56:05.180280 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:56:05.180287 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:56:05.180295 kernel: Segment Routing with IPv6
Jan 23 23:56:05.180302 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:56:05.180310 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:56:05.180319 kernel: Key type dns_resolver registered
Jan 23 23:56:05.180326 kernel: registered taskstats version 1
Jan 23 23:56:05.180334 kernel: Loading compiled-in X.509 certificates
Jan 23 23:56:05.180341 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:56:05.180348 kernel: Key type .fscrypt registered
Jan 23 23:56:05.180356 kernel: Key type fscrypt-provisioning registered
Jan 23 23:56:05.180363 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:56:05.180371 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:56:05.180379 kernel: ima: No architecture policies found
Jan 23 23:56:05.180387 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:56:05.180395 kernel: clk: Disabling unused clocks
Jan 23 23:56:05.180403 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:56:05.180410 kernel: Run /init as init process
Jan 23 23:56:05.180418 kernel: with arguments:
Jan 23 23:56:05.180425 kernel: /init
Jan 23 23:56:05.180432 kernel: with environment:
Jan 23 23:56:05.180439 kernel: HOME=/
Jan 23 23:56:05.180446 kernel: TERM=linux
Jan 23 23:56:05.180455 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:56:05.180467 systemd[1]: Detected virtualization microsoft.
Jan 23 23:56:05.180475 systemd[1]: Detected architecture arm64.
Jan 23 23:56:05.180483 systemd[1]: Running in initrd.
Jan 23 23:56:05.180490 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:56:05.180497 systemd[1]: Hostname set to .
Jan 23 23:56:05.180506 systemd[1]: Initializing machine ID from random generator.
Jan 23 23:56:05.180515 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:56:05.180523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:56:05.180531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:56:05.180539 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:56:05.180548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:56:05.180556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:56:05.180564 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:56:05.180573 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:56:05.180583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:56:05.180591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:56:05.180603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:56:05.180612 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:56:05.180620 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:56:05.180628 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:56:05.180635 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:56:05.180643 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:56:05.180653 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:56:05.180661 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:56:05.180669 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:56:05.180677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:56:05.180685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:56:05.180693 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:56:05.180701 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:56:05.180709 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:56:05.180718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:56:05.180726 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:56:05.180734 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:56:05.180742 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:56:05.180765 systemd-journald[217]: Collecting audit messages is disabled.
Jan 23 23:56:05.180785 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:56:05.180793 systemd-journald[217]: Journal started
Jan 23 23:56:05.180812 systemd-journald[217]: Runtime Journal (/run/log/journal/1d4f9157fc8c429d8a9292971973bc3a) is 8.0M, max 78.5M, 70.5M free.
Jan 23 23:56:05.193086 systemd-modules-load[218]: Inserted module 'overlay'
Jan 23 23:56:05.202338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:05.212042 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:56:05.213220 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:56:05.222657 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:56:05.242005 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:56:05.244797 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 23 23:56:05.248631 kernel: Bridge firewalling registered
Jan 23 23:56:05.247623 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:56:05.252359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:56:05.257351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:05.278982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:05.286801 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:56:05.296497 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:56:05.309597 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:56:05.324266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:56:05.348794 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:05.363219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:56:05.368110 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:56:05.386090 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:56:05.396808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:56:05.411350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:56:05.426337 dracut-cmdline[254]: dracut-dracut-053
Jan 23 23:56:05.426337 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:56:05.464705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:56:05.465529 systemd-resolved[258]: Positive Trust Anchors:
Jan 23 23:56:05.465539 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:56:05.465571 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:56:05.467913 systemd-resolved[258]: Defaulting to hostname 'linux'.
Jan 23 23:56:05.476891 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:56:05.482731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:56:05.587618 kernel: SCSI subsystem initialized
Jan 23 23:56:05.594612 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:56:05.604622 kernel: iscsi: registered transport (tcp)
Jan 23 23:56:05.621024 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:56:05.621055 kernel: QLogic iSCSI HBA Driver
Jan 23 23:56:05.653497 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:56:05.663932 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:56:05.694483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:56:05.694549 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:56:05.699346 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:56:05.748617 kernel: raid6: neonx8 gen() 15810 MB/s
Jan 23 23:56:05.765607 kernel: raid6: neonx4 gen() 15691 MB/s
Jan 23 23:56:05.784606 kernel: raid6: neonx2 gen() 13318 MB/s
Jan 23 23:56:05.804606 kernel: raid6: neonx1 gen() 10494 MB/s
Jan 23 23:56:05.823608 kernel: raid6: int64x8 gen() 6979 MB/s
Jan 23 23:56:05.842606 kernel: raid6: int64x4 gen() 7349 MB/s
Jan 23 23:56:05.862606 kernel: raid6: int64x2 gen() 6146 MB/s
Jan 23 23:56:05.884636 kernel: raid6: int64x1 gen() 5063 MB/s
Jan 23 23:56:05.884647 kernel: raid6: using algorithm neonx8 gen() 15810 MB/s
Jan 23 23:56:05.907527 kernel: raid6: .... xor() 12053 MB/s, rmw enabled
Jan 23 23:56:05.907537 kernel: raid6: using neon recovery algorithm
Jan 23 23:56:05.918603 kernel: xor: measuring software checksum speed
Jan 23 23:56:05.918618 kernel: 8regs : 19750 MB/sec
Jan 23 23:56:05.921499 kernel: 32regs : 19636 MB/sec
Jan 23 23:56:05.924233 kernel: arm64_neon : 27061 MB/sec
Jan 23 23:56:05.927370 kernel: xor: using function: arm64_neon (27061 MB/sec)
Jan 23 23:56:05.977631 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:56:05.986631 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:56:06.010751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:56:06.029885 systemd-udevd[441]: Using default interface naming scheme 'v255'.
Jan 23 23:56:06.034197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:56:06.056714 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:56:06.071650 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation
Jan 23 23:56:06.103644 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:56:06.119050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:56:06.157022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:56:06.173416 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:56:06.195207 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:56:06.210477 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:56:06.221998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:56:06.238282 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:56:06.262833 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:56:06.279953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:56:06.280108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:06.299937 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 23:56:06.301754 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:06.333663 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 23:56:06.333688 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 23 23:56:06.333700 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 23:56:06.316796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:56:06.362085 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 23:56:06.362106 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 23 23:56:06.362116 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 23:56:06.362265 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 23:56:06.362276 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 23:56:06.317012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:06.379080 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 23:56:06.379101 kernel: scsi host0: storvsc_host_t
Jan 23 23:56:06.337672 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:06.404908 kernel: scsi host1: storvsc_host_t
Jan 23 23:56:06.405260 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 23 23:56:06.406058 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 23:56:06.378966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:06.393943 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:56:06.436221 kernel: PTP clock support registered
Jan 23 23:56:06.436250 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 23:56:06.418027 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:56:06.418127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:06.455591 kernel: hv_vmbus: registering driver hv_utils
Jan 23 23:56:06.454979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:06.481292 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 23:56:06.481315 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 23:56:06.481325 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 23:56:06.481540 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 23:56:06.481554 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: VF slot 1 added
Jan 23 23:56:06.065878 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 23:56:06.082900 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 23:56:06.082923 systemd-journald[217]: Time jumped backwards, rotating.
Jan 23 23:56:06.065625 systemd-resolved[258]: Clock change detected. Flushing caches.
Jan 23 23:56:06.088566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:06.122468 kernel: hv_vmbus: registering driver hv_pci
Jan 23 23:56:06.122491 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 23 23:56:06.122661 kernel: hv_pci 1a146cfb-a834-486e-80f8-7edb7c9a5440: PCI VMBus probing: Using version 0x10004
Jan 23 23:56:06.122772 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 23 23:56:06.119555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:06.152931 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 23:56:06.157994 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 23 23:56:06.158092 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 23 23:56:06.158184 kernel: hv_pci 1a146cfb-a834-486e-80f8-7edb7c9a5440: PCI host bridge to bus a834:00
Jan 23 23:56:06.158282 kernel: pci_bus a834:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 23 23:56:06.158386 kernel: pci_bus a834:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 23:56:06.158485 kernel: pci a834:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 23 23:56:06.169858 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#51 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:56:06.175440 kernel: pci a834:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:56:06.180460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:56:06.180500 kernel: pci a834:00:02.0: enabling Extended Tags
Jan 23 23:56:06.186519 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 23:56:06.204583 kernel: pci a834:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a834:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 23 23:56:06.218184 kernel: pci_bus a834:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 23:56:06.218394 kernel: pci a834:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:56:06.221714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:06.240460 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#68 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:56:06.277465 kernel: mlx5_core a834:00:02.0: enabling device (0000 -> 0002)
Jan 23 23:56:06.284043 kernel: mlx5_core a834:00:02.0: firmware version: 16.30.5026
Jan 23 23:56:06.485148 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: VF registering: eth1
Jan 23 23:56:06.485351 kernel: mlx5_core a834:00:02.0 eth1: joined to eth0
Jan 23 23:56:06.490793 kernel: mlx5_core a834:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 23 23:56:06.500424 kernel: mlx5_core a834:00:02.0 enP43060s1: renamed from eth1
Jan 23 23:56:06.722016 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 23 23:56:06.743431 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (485)
Jan 23 23:56:06.760128 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 23:56:06.775481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 23 23:56:06.797424 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (504)
Jan 23 23:56:06.810518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 23 23:56:06.816079 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 23 23:56:06.839639 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:56:06.859425 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:56:06.866427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:56:06.876312 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:56:07.876504 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:56:07.876616 disk-uuid[609]: The operation has completed successfully.
Jan 23 23:56:07.943568 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:56:07.945439 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:56:07.974577 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:56:07.995227 sh[722]: Success
Jan 23 23:56:08.031455 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:56:08.325706 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:56:08.346539 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:56:08.354386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:56:08.385653 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:56:08.385698 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:08.391062 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:56:08.394974 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:56:08.398286 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:56:08.766403 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:56:08.774261 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:56:08.789666 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:56:08.796017 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:56:08.830561 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:08.830617 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:08.834139 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:56:08.870452 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:56:08.886098 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:56:08.890494 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:08.893344 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:56:08.914674 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:56:08.919639 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:56:08.941565 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:56:08.953475 systemd-networkd[904]: lo: Link UP
Jan 23 23:56:08.953483 systemd-networkd[904]: lo: Gained carrier
Jan 23 23:56:08.955106 systemd-networkd[904]: Enumeration completed
Jan 23 23:56:08.955256 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:56:08.956762 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:56:08.956765 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:56:08.964981 systemd[1]: Reached target network.target - Network.
Jan 23 23:56:09.039538 kernel: mlx5_core a834:00:02.0 enP43060s1: Link up
Jan 23 23:56:09.079603 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: Data path switched to VF: enP43060s1
Jan 23 23:56:09.079268 systemd-networkd[904]: enP43060s1: Link UP
Jan 23 23:56:09.079357 systemd-networkd[904]: eth0: Link UP
Jan 23 23:56:09.079481 systemd-networkd[904]: eth0: Gained carrier
Jan 23 23:56:09.079490 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:56:09.099451 systemd-networkd[904]: enP43060s1: Gained carrier
Jan 23 23:56:09.109447 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 23:56:09.942900 ignition[906]: Ignition 2.19.0
Jan 23 23:56:09.942911 ignition[906]: Stage: fetch-offline
Jan 23 23:56:09.945832 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:56:09.942952 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:09.942960 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:09.943058 ignition[906]: parsed url from cmdline: ""
Jan 23 23:56:09.965690 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:56:09.943061 ignition[906]: no config URL provided
Jan 23 23:56:09.943066 ignition[906]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:56:09.943073 ignition[906]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:56:09.943077 ignition[906]: failed to fetch config: resource requires networking
Jan 23 23:56:09.943585 ignition[906]: Ignition finished successfully
Jan 23 23:56:09.986936 ignition[922]: Ignition 2.19.0
Jan 23 23:56:09.986942 ignition[922]: Stage: fetch
Jan 23 23:56:09.987164 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:09.987178 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:09.987286 ignition[922]: parsed url from cmdline: ""
Jan 23 23:56:09.987290 ignition[922]: no config URL provided
Jan 23 23:56:09.987295 ignition[922]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:56:09.987303 ignition[922]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:56:09.987328 ignition[922]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 23:56:10.108851 ignition[922]: GET result: OK
Jan 23 23:56:10.111223 ignition[922]: config has been read from IMDS userdata
Jan 23 23:56:10.111266 ignition[922]: parsing config with SHA512: f88abf7f675d415b4db1b54d369d8c22aaf392d53a3fe948a97a9e5cc3c7b945b61f87af45af157ae2f2ba7b23ba3031ccef447d2140ff81ae48e0c412329d94
Jan 23 23:56:10.115295 unknown[922]: fetched base config from "system"
Jan 23 23:56:10.115692 ignition[922]: fetch: fetch complete
Jan 23 23:56:10.115302 unknown[922]: fetched base config from "system"
Jan 23 23:56:10.115696 ignition[922]: fetch: fetch passed
Jan 23 23:56:10.115306 unknown[922]: fetched user config from "azure"
Jan 23 23:56:10.115742 ignition[922]: Ignition finished successfully
Jan 23 23:56:10.119281 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:56:10.134639 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:56:10.156331 ignition[928]: Ignition 2.19.0
Jan 23 23:56:10.156339 ignition[928]: Stage: kargs
Jan 23 23:56:10.160774 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:56:10.156525 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:10.156534 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:10.157625 ignition[928]: kargs: kargs passed
Jan 23 23:56:10.157675 ignition[928]: Ignition finished successfully
Jan 23 23:56:10.180654 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:56:10.200731 ignition[934]: Ignition 2.19.0
Jan 23 23:56:10.200741 ignition[934]: Stage: disks
Jan 23 23:56:10.205071 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:56:10.200913 ignition[934]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:10.212273 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:56:10.200923 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:10.221547 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:56:10.201820 ignition[934]: disks: disks passed
Jan 23 23:56:10.230467 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:56:10.201873 ignition[934]: Ignition finished successfully
Jan 23 23:56:10.239013 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:56:10.247745 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:56:10.266675 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:56:10.358706 systemd-fsck[943]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 23 23:56:10.366588 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:56:10.381610 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:56:10.431446 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:56:10.431054 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:56:10.435332 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:56:10.478490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:56:10.502644 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954)
Jan 23 23:56:10.502678 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:10.507657 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:10.511537 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:56:10.523344 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:56:10.519629 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:56:10.527976 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 23:56:10.538974 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:56:10.539492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:56:10.555697 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:56:10.563333 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:56:10.576680 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:56:10.954525 systemd-networkd[904]: eth0: Gained IPv6LL
Jan 23 23:56:11.028243 coreos-metadata[971]: Jan 23 23:56:11.028 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 23:56:11.036444 coreos-metadata[971]: Jan 23 23:56:11.036 INFO Fetch successful
Jan 23 23:56:11.040718 coreos-metadata[971]: Jan 23 23:56:11.040 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 23:56:11.060712 coreos-metadata[971]: Jan 23 23:56:11.060 INFO Fetch successful
Jan 23 23:56:11.077470 coreos-metadata[971]: Jan 23 23:56:11.077 INFO wrote hostname ci-4081.3.6-n-31deed6810 to /sysroot/etc/hostname
Jan 23 23:56:11.085043 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:56:11.295766 initrd-setup-root[983]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:56:11.319464 initrd-setup-root[990]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:56:11.342287 initrd-setup-root[997]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:56:11.347968 initrd-setup-root[1004]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:56:12.237383 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:56:12.252613 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:56:12.269755 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:56:12.276701 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:12.278782 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:56:12.309176 ignition[1071]: INFO : Ignition 2.19.0
Jan 23 23:56:12.309176 ignition[1071]: INFO : Stage: mount
Jan 23 23:56:12.309176 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:12.309176 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:12.304728 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:56:12.333981 ignition[1071]: INFO : mount: mount passed
Jan 23 23:56:12.333981 ignition[1071]: INFO : Ignition finished successfully
Jan 23 23:56:12.314280 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:56:12.341629 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:56:12.355949 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:56:12.380427 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1083)
Jan 23 23:56:12.398680 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:12.398731 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:12.401891 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:56:12.408419 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:56:12.410070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:56:12.440116 ignition[1100]: INFO : Ignition 2.19.0
Jan 23 23:56:12.444168 ignition[1100]: INFO : Stage: files
Jan 23 23:56:12.444168 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:12.451440 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:12.451440 ignition[1100]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:56:12.461985 ignition[1100]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:56:12.461985 ignition[1100]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:56:12.558813 ignition[1100]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:56:12.564782 ignition[1100]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:56:12.564782 ignition[1100]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:56:12.559832 unknown[1100]: wrote ssh authorized keys file for user: core
Jan 23 23:56:12.585492 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:56:12.594068 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 23:56:12.650365 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 23 23:56:13.509824 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 23:56:13.785166 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:56:13.785166 ignition[1100]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: files passed
Jan 23 23:56:13.801339 ignition[1100]: INFO : Ignition finished successfully
Jan 23 23:56:13.796701 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:56:13.833866 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:56:13.841606 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:56:13.853294 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:56:13.909850 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:13.909850 initrd-setup-root-after-ignition[1127]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:13.853450 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:56:13.935564 initrd-setup-root-after-ignition[1131]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:13.881907 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:56:13.890781 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:56:13.916683 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:56:13.961284 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:56:13.961402 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:56:13.971487 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:56:13.980932 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:56:13.989495 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:56:14.004797 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:56:14.019912 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:56:14.033860 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:56:14.048881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:56:14.054717 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:56:14.064906 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:56:14.074056 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:56:14.074177 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:56:14.086816 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:56:14.091387 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:56:14.100276 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:56:14.109357 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:56:14.118360 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:56:14.127976 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:56:14.137252 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:56:14.146887 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:56:14.155742 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:56:14.165485 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:56:14.173237 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:56:14.173359 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:56:14.185237 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:56:14.193749 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:56:14.203260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:56:14.207392 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:56:14.213126 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:56:14.213265 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:56:14.226887 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:56:14.227012 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:56:14.232470 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:56:14.232561 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:56:14.242640 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 23:56:14.242732 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:56:14.267693 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 23 23:56:14.274498 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:56:14.274655 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:56:14.320635 ignition[1152]: INFO : Ignition 2.19.0 Jan 23 23:56:14.320635 ignition[1152]: INFO : Stage: umount Jan 23 23:56:14.320635 ignition[1152]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:56:14.320635 ignition[1152]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:56:14.320635 ignition[1152]: INFO : umount: umount passed Jan 23 23:56:14.320635 ignition[1152]: INFO : Ignition finished successfully Jan 23 23:56:14.305474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:56:14.315035 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:56:14.315207 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:56:14.320854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:56:14.320955 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:56:14.328587 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:56:14.329212 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:56:14.330455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:56:14.335938 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:56:14.336203 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:56:14.344836 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:56:14.344906 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:56:14.350286 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:56:14.350335 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:56:14.362011 systemd[1]: Stopped target network.target - Network. Jan 23 23:56:14.372986 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:56:14.373054 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:56:14.381664 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:56:14.390705 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:56:14.394331 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:56:14.399866 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:56:14.408029 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:56:14.418396 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:56:14.418457 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:56:14.426612 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:56:14.426658 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:56:14.437581 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:56:14.437641 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:56:14.445623 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:56:14.445663 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:56:14.454334 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:56:14.463752 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 23 23:56:14.472360 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:56:14.474439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:56:14.670364 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: Data path switched from VF: enP43060s1 Jan 23 23:56:14.475819 systemd-networkd[904]: eth0: DHCPv6 lease lost Jan 23 23:56:14.482846 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:56:14.482970 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:56:14.493242 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:56:14.494434 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:56:14.504739 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:56:14.504791 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:56:14.521714 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:56:14.528763 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:56:14.528838 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:56:14.539033 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:56:14.539085 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:56:14.547076 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:56:14.547116 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:56:14.557105 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:56:14.557147 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:56:14.569138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:56:14.598940 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:56:14.599132 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:56:14.605610 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:56:14.605660 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:56:14.613600 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:56:14.613629 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:56:14.622377 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:56:14.622432 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:56:14.636065 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:56:14.636148 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:56:14.655926 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:56:14.655990 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:56:14.685544 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:56:14.694993 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:56:14.695060 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:56:14.704774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:56:14.704816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 23:56:14.720553 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:56:14.720648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:56:14.729543 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:56:14.729635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:56:14.739889 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:56:14.739993 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:56:14.754744 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:56:14.754896 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:56:14.763567 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:56:14.781643 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:56:14.960664 systemd[1]: Switching root. Jan 23 23:56:15.049016 systemd-journald[217]: Journal stopped Jan 23 23:56:05.178483 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 23 23:56:05.178504 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026 Jan 23 23:56:05.178512 kernel: KASLR enabled Jan 23 23:56:05.178518 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 23 23:56:05.178525 kernel: printk: bootconsole [pl11] enabled Jan 23 23:56:05.178530 kernel: efi: EFI v2.7 by EDK II Jan 23 23:56:05.178538 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 23 23:56:05.178544 kernel: random: crng init done Jan 23 23:56:05.178550 kernel: ACPI: Early table checksum verification disabled Jan 23 23:56:05.178556 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 23 23:56:05.178562 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178568 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178575 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 23 23:56:05.178582 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178589 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178595 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178609 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178617 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178623 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178630 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 23 23:56:05.178636 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:56:05.178642 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 23 23:56:05.178649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 23 23:56:05.178655 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 23 23:56:05.178662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] 
Jan 23 23:56:05.178668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 23 23:56:05.178674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 23 23:56:05.178681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 23 23:56:05.178688 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 23 23:56:05.178695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 23 23:56:05.178701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 23 23:56:05.178708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 23 23:56:05.178714 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 23 23:56:05.178720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 23 23:56:05.178726 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 23 23:56:05.178733 kernel: Zone ranges: Jan 23 23:56:05.178739 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 23 23:56:05.178745 kernel: DMA32 empty Jan 23 23:56:05.178751 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 23:56:05.178758 kernel: Movable zone start for each node Jan 23 23:56:05.178768 kernel: Early memory node ranges Jan 23 23:56:05.178775 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 23 23:56:05.178782 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 23 23:56:05.178789 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 23 23:56:05.178796 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 23 23:56:05.178804 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 23 23:56:05.178810 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 23 23:56:05.178817 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 23:56:05.178824 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 23 23:56:05.178831 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 23 23:56:05.178838 kernel: psci: probing for conduit method from ACPI. Jan 23 23:56:05.178844 kernel: psci: PSCIv1.1 detected in firmware. Jan 23 23:56:05.178851 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 23:56:05.178858 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 23 23:56:05.178864 kernel: psci: SMC Calling Convention v1.4 Jan 23 23:56:05.178871 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 23 23:56:05.178878 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 23 23:56:05.178886 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 23 23:56:05.178892 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 23 23:56:05.178899 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 23:56:05.178906 kernel: Detected PIPT I-cache on CPU0 Jan 23 23:56:05.178913 kernel: CPU features: detected: GIC system register CPU interface Jan 23 23:56:05.178920 kernel: CPU features: detected: Hardware dirty bit management Jan 23 23:56:05.178926 kernel: CPU features: detected: Spectre-BHB Jan 23 23:56:05.178933 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 23 23:56:05.178940 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 23 23:56:05.178947 kernel: CPU features: detected: ARM erratum 1418040 Jan 23 23:56:05.178953 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 23 23:56:05.178961 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 23 23:56:05.178968 kernel: alternatives: applying boot alternatives Jan 23 23:56:05.178976 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:56:05.178983 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 23:56:05.178990 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 23:56:05.178997 kernel: Fallback order for Node 0: 0 Jan 23 23:56:05.179004 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 23 23:56:05.179010 kernel: Policy zone: Normal Jan 23 23:56:05.179017 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:56:05.179024 kernel: software IO TLB: area num 2. Jan 23 23:56:05.179031 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 23 23:56:05.179040 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 23 23:56:05.179047 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:56:05.179053 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:56:05.179060 kernel: rcu: RCU event tracing is enabled. Jan 23 23:56:05.179067 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:56:05.179075 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:56:05.179081 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:56:05.179088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 23 23:56:05.179095 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:56:05.179102 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:56:05.179108 kernel: GICv3: 960 SPIs implemented Jan 23 23:56:05.179116 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:56:05.179123 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:56:05.179130 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 23 23:56:05.179136 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 23 23:56:05.179143 kernel: ITS: No ITS available, not enabling LPIs Jan 23 23:56:05.179150 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:56:05.179157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:56:05.179163 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 23 23:56:05.179171 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 23 23:56:05.179177 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 23 23:56:05.179184 kernel: Console: colour dummy device 80x25 Jan 23 23:56:05.179193 kernel: printk: console [tty1] enabled Jan 23 23:56:05.179200 kernel: ACPI: Core revision 20230628 Jan 23 23:56:05.179207 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 23 23:56:05.179214 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:56:05.179221 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:56:05.179228 kernel: landlock: Up and running. Jan 23 23:56:05.179235 kernel: SELinux: Initializing. Jan 23 23:56:05.179242 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:56:05.179249 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:56:05.179260 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:56:05.179267 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:56:05.179274 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 23 23:56:05.179281 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 23 23:56:05.179288 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 23:56:05.179295 kernel: rcu: Hierarchical SRCU implementation. Jan 23 23:56:05.179302 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:56:05.179309 kernel: Remapping and enabling EFI services. Jan 23 23:56:05.179322 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:56:05.179329 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:56:05.179336 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 23 23:56:05.179344 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:56:05.179352 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 23 23:56:05.179360 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:56:05.179367 kernel: SMP: Total of 2 processors activated. 
Jan 23 23:56:05.179374 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:56:05.179382 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 23 23:56:05.179391 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 23:56:05.179398 kernel: CPU features: detected: CRC32 instructions Jan 23 23:56:05.179405 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 23:56:05.179413 kernel: CPU features: detected: LSE atomic instructions Jan 23 23:56:05.179420 kernel: CPU features: detected: Privileged Access Never Jan 23 23:56:05.179427 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:56:05.179435 kernel: alternatives: applying system-wide alternatives Jan 23 23:56:05.179442 kernel: devtmpfs: initialized Jan 23 23:56:05.179449 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:56:05.179458 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:56:05.179465 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:56:05.179472 kernel: SMBIOS 3.1.0 present. Jan 23 23:56:05.179480 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 23 23:56:05.179487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:56:05.179495 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:56:05.179502 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:56:05.179509 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:56:05.179517 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:56:05.179526 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 23 23:56:05.179533 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:56:05.179540 kernel: cpuidle: using governor menu Jan 23 23:56:05.179548 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 23:56:05.179555 kernel: ASID allocator initialised with 32768 entries Jan 23 23:56:05.179562 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:56:05.179570 kernel: Serial: AMBA PL011 UART driver Jan 23 23:56:05.179577 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 23:56:05.179585 kernel: Modules: 0 pages in range for non-PLT usage Jan 23 23:56:05.179593 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:56:05.179604 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:56:05.179611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:56:05.179619 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:56:05.179626 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:56:05.179633 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:56:05.179641 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:56:05.179648 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:56:05.179655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:56:05.179664 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:56:05.179671 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:56:05.179679 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:56:05.179686 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:56:05.179693 kernel: ACPI: Interpreter enabled Jan 23 23:56:05.179700 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:56:05.179708 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 23 23:56:05.179715 kernel: printk: console [ttyAMA0] enabled Jan 23 23:56:05.179722 kernel: printk: bootconsole [pl11] disabled Jan 23 23:56:05.179731 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 23 23:56:05.179738 kernel: iommu: Default domain type: Translated Jan 23 23:56:05.179746 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:56:05.179753 kernel: efivars: Registered efivars operations Jan 23 23:56:05.179760 kernel: vgaarb: loaded Jan 23 23:56:05.179767 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:56:05.179775 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:56:05.179782 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:56:05.179789 kernel: pnp: PnP ACPI init Jan 23 23:56:05.179798 kernel: pnp: PnP ACPI: found 0 devices Jan 23 23:56:05.179805 kernel: NET: Registered PF_INET protocol family Jan 23 23:56:05.179812 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:56:05.179820 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:56:05.179827 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:56:05.179835 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:56:05.179842 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:56:05.179850 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:56:05.179857 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:56:05.179866 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:56:05.179873 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 
23:56:05.179880 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:56:05.179888 kernel: kvm [1]: HYP mode not available Jan 23 23:56:05.179895 kernel: Initialise system trusted keyrings Jan 23 23:56:05.179903 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:56:05.179910 kernel: Key type asymmetric registered Jan 23 23:56:05.179917 kernel: Asymmetric key parser 'x509' registered Jan 23 23:56:05.179924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:56:05.179933 kernel: io scheduler mq-deadline registered Jan 23 23:56:05.179940 kernel: io scheduler kyber registered Jan 23 23:56:05.179947 kernel: io scheduler bfq registered Jan 23 23:56:05.179955 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:56:05.179962 kernel: thunder_xcv, ver 1.0 Jan 23 23:56:05.179969 kernel: thunder_bgx, ver 1.0 Jan 23 23:56:05.179976 kernel: nicpf, ver 1.0 Jan 23 23:56:05.179983 kernel: nicvf, ver 1.0 Jan 23 23:56:05.180109 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:56:05.180187 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:56:04 UTC (1769212564) Jan 23 23:56:05.180197 kernel: efifb: probing for efifb Jan 23 23:56:05.180205 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 23:56:05.180212 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 23:56:05.180219 kernel: efifb: scrolling: redraw Jan 23 23:56:05.180226 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 23:56:05.180234 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:56:05.180241 kernel: fb0: EFI VGA frame buffer device Jan 23 23:56:05.180250 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 23 23:56:05.180258 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:56:05.180265 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 23 23:56:05.180273 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:56:05.180280 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:56:05.180287 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:56:05.180295 kernel: Segment Routing with IPv6 Jan 23 23:56:05.180302 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:56:05.180310 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:56:05.180319 kernel: Key type dns_resolver registered Jan 23 23:56:05.180326 kernel: registered taskstats version 1 Jan 23 23:56:05.180334 kernel: Loading compiled-in X.509 certificates Jan 23 23:56:05.180341 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:56:05.180348 kernel: Key type .fscrypt registered Jan 23 23:56:05.180356 kernel: Key type fscrypt-provisioning registered Jan 23 23:56:05.180363 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 23:56:05.180371 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:56:05.180379 kernel: ima: No architecture policies found Jan 23 23:56:05.180387 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:56:05.180395 kernel: clk: Disabling unused clocks Jan 23 23:56:05.180403 kernel: Freeing unused kernel memory: 39424K Jan 23 23:56:05.180410 kernel: Run /init as init process Jan 23 23:56:05.180418 kernel: with arguments: Jan 23 23:56:05.180425 kernel: /init Jan 23 23:56:05.180432 kernel: with environment: Jan 23 23:56:05.180439 kernel: HOME=/ Jan 23 23:56:05.180446 kernel: TERM=linux Jan 23 23:56:05.180455 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:56:05.180467 systemd[1]: Detected virtualization microsoft. Jan 23 23:56:05.180475 systemd[1]: Detected architecture arm64. Jan 23 23:56:05.180483 systemd[1]: Running in initrd. Jan 23 23:56:05.180490 systemd[1]: No hostname configured, using default hostname. Jan 23 23:56:05.180497 systemd[1]: Hostname set to . Jan 23 23:56:05.180506 systemd[1]: Initializing machine ID from random generator. Jan 23 23:56:05.180515 systemd[1]: Queued start job for default target initrd.target. Jan 23 23:56:05.180523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:56:05.180531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:56:05.180539 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:56:05.180548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:56:05.180556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:56:05.180564 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:56:05.180573 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:56:05.180583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:56:05.180591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:56:05.180603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:56:05.180612 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:56:05.180620 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:56:05.180628 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:56:05.180635 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:56:05.180643 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:56:05.180653 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:56:05.180661 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:56:05.180669 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:56:05.180677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 23:56:05.180685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:56:05.180693 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:56:05.180701 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:56:05.180709 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 23:56:05.180718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:56:05.180726 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:56:05.180734 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:56:05.180742 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:56:05.180765 systemd-journald[217]: Collecting audit messages is disabled. Jan 23 23:56:05.180785 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:56:05.180793 systemd-journald[217]: Journal started Jan 23 23:56:05.180812 systemd-journald[217]: Runtime Journal (/run/log/journal/1d4f9157fc8c429d8a9292971973bc3a) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:56:05.193086 systemd-modules-load[218]: Inserted module 'overlay' Jan 23 23:56:05.202338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:56:05.212042 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:56:05.213220 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:56:05.222657 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:56:05.242005 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 23:56:05.244797 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 23 23:56:05.248631 kernel: Bridge firewalling registered Jan 23 23:56:05.247623 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:56:05.252359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:56:05.257351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:56:05.278982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:56:05.286801 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:56:05.296497 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:56:05.309597 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:56:05.324266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:56:05.348794 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:56:05.363219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:56:05.368110 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:56:05.386090 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 23:56:05.396808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:56:05.411350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 23:56:05.426337 dracut-cmdline[254]: dracut-dracut-053 Jan 23 23:56:05.426337 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:56:05.464705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:56:05.465529 systemd-resolved[258]: Positive Trust Anchors: Jan 23 23:56:05.465539 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:56:05.465571 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:56:05.467913 systemd-resolved[258]: Defaulting to hostname 'linux'. Jan 23 23:56:05.476891 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:56:05.482731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:56:05.587618 kernel: SCSI subsystem initialized Jan 23 23:56:05.594612 kernel: Loading iSCSI transport class v2.0-870. Jan 23 23:56:05.604622 kernel: iscsi: registered transport (tcp) Jan 23 23:56:05.621024 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:56:05.621055 kernel: QLogic iSCSI HBA Driver Jan 23 23:56:05.653497 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:56:05.663932 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:56:05.694483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:56:05.694549 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:56:05.699346 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 23 23:56:05.748617 kernel: raid6: neonx8 gen() 15810 MB/s Jan 23 23:56:05.765607 kernel: raid6: neonx4 gen() 15691 MB/s Jan 23 23:56:05.784606 kernel: raid6: neonx2 gen() 13318 MB/s Jan 23 23:56:05.804606 kernel: raid6: neonx1 gen() 10494 MB/s Jan 23 23:56:05.823608 kernel: raid6: int64x8 gen() 6979 MB/s Jan 23 23:56:05.842606 kernel: raid6: int64x4 gen() 7349 MB/s Jan 23 23:56:05.862606 kernel: raid6: int64x2 gen() 6146 MB/s Jan 23 23:56:05.884636 kernel: raid6: int64x1 gen() 5063 MB/s Jan 23 23:56:05.884647 kernel: raid6: using algorithm neonx8 gen() 15810 MB/s Jan 23 23:56:05.907527 kernel: raid6: .... 
xor() 12053 MB/s, rmw enabled Jan 23 23:56:05.907537 kernel: raid6: using neon recovery algorithm Jan 23 23:56:05.918603 kernel: xor: measuring software checksum speed Jan 23 23:56:05.918618 kernel: 8regs : 19750 MB/sec Jan 23 23:56:05.921499 kernel: 32regs : 19636 MB/sec Jan 23 23:56:05.924233 kernel: arm64_neon : 27061 MB/sec Jan 23 23:56:05.927370 kernel: xor: using function: arm64_neon (27061 MB/sec) Jan 23 23:56:05.977631 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:56:05.986631 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:56:06.010751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:56:06.029885 systemd-udevd[441]: Using default interface naming scheme 'v255'. Jan 23 23:56:06.034197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:56:06.056714 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:56:06.071650 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation Jan 23 23:56:06.103644 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:56:06.119050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:56:06.157022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:56:06.173416 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:56:06.195207 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:56:06.210477 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:56:06.221998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:56:06.238282 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:56:06.262833 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:56:06.279953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:56:06.280108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:56:06.299937 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 23:56:06.301754 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:56:06.333663 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 23:56:06.333688 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 23 23:56:06.333700 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 23:56:06.316796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:56:06.362085 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 23:56:06.362106 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 23 23:56:06.362116 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 23:56:06.362265 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 23:56:06.362276 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 23:56:06.317012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 23:56:06.379080 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 23:56:06.379101 kernel: scsi host0: storvsc_host_t Jan 23 23:56:06.337672 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:56:06.404908 kernel: scsi host1: storvsc_host_t Jan 23 23:56:06.405260 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 23 23:56:06.406058 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 23:56:06.378966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:56:06.393943 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:56:06.436221 kernel: PTP clock support registered Jan 23 23:56:06.436250 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 23:56:06.418027 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:56:06.418127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:56:06.455591 kernel: hv_vmbus: registering driver hv_utils Jan 23 23:56:06.454979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:56:06.481292 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 23:56:06.481315 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 23:56:06.481325 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 23:56:06.481540 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 23:56:06.481554 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: VF slot 1 added Jan 23 23:56:06.065878 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 23:56:06.082900 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 23:56:06.082923 systemd-journald[217]: Time jumped backwards, rotating. Jan 23 23:56:06.065625 systemd-resolved[258]: Clock change detected. Flushing caches. Jan 23 23:56:06.088566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:56:06.122468 kernel: hv_vmbus: registering driver hv_pci Jan 23 23:56:06.122491 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 23 23:56:06.122661 kernel: hv_pci 1a146cfb-a834-486e-80f8-7edb7c9a5440: PCI VMBus probing: Using version 0x10004 Jan 23 23:56:06.122772 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 23 23:56:06.119555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 23 23:56:06.152931 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 23:56:06.157994 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 23 23:56:06.158092 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 23 23:56:06.158184 kernel: hv_pci 1a146cfb-a834-486e-80f8-7edb7c9a5440: PCI host bridge to bus a834:00 Jan 23 23:56:06.158282 kernel: pci_bus a834:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 23 23:56:06.158386 kernel: pci_bus a834:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 23:56:06.158485 kernel: pci a834:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 23 23:56:06.169858 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#51 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:56:06.175440 kernel: pci a834:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:56:06.180460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:56:06.180500 kernel: pci a834:00:02.0: enabling Extended Tags Jan 23 23:56:06.186519 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 23:56:06.204583 kernel: pci a834:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a834:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 23 23:56:06.218184 kernel: pci_bus a834:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 23:56:06.218394 kernel: pci a834:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:56:06.221714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:56:06.240460 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#68 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:56:06.277465 kernel: mlx5_core a834:00:02.0: enabling device (0000 -> 0002) Jan 23 23:56:06.284043 kernel: mlx5_core a834:00:02.0: firmware version: 16.30.5026 Jan 23 23:56:06.485148 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: VF registering: eth1 Jan 23 23:56:06.485351 kernel: mlx5_core a834:00:02.0 eth1: joined to eth0 Jan 23 23:56:06.490793 kernel: mlx5_core a834:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 23 23:56:06.500424 kernel: mlx5_core a834:00:02.0 enP43060s1: renamed from eth1 Jan 23 23:56:06.722016 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 23 23:56:06.743431 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (485) Jan 23 23:56:06.760128 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 23:56:06.775481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 23 23:56:06.797424 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (504) Jan 23 23:56:06.810518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 23 23:56:06.816079 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 23 23:56:06.839639 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 23 23:56:06.859425 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:56:06.866427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:56:06.876312 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:56:07.876504 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:56:07.876616 disk-uuid[609]: The operation has completed successfully. Jan 23 23:56:07.943568 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:56:07.945439 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:56:07.974577 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 23:56:07.995227 sh[722]: Success Jan 23 23:56:08.031455 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:56:08.325706 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:56:08.346539 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:56:08.354386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 23:56:08.385653 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:56:08.385698 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:56:08.391062 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:56:08.394974 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:56:08.398286 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:56:08.766403 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:56:08.774261 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:56:08.789666 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:56:08.796017 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:56:08.830561 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:56:08.830617 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:56:08.834139 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:56:08.870452 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:56:08.886098 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:56:08.890494 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:56:08.893344 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:56:08.914674 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:56:08.919639 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:56:08.941565 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:56:08.953475 systemd-networkd[904]: lo: Link UP Jan 23 23:56:08.953483 systemd-networkd[904]: lo: Gained carrier Jan 23 23:56:08.955106 systemd-networkd[904]: Enumeration completed Jan 23 23:56:08.955256 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:56:08.956762 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 23:56:08.956765 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:56:08.964981 systemd[1]: Reached target network.target - Network. Jan 23 23:56:09.039538 kernel: mlx5_core a834:00:02.0 enP43060s1: Link up Jan 23 23:56:09.079603 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: Data path switched to VF: enP43060s1 Jan 23 23:56:09.079268 systemd-networkd[904]: enP43060s1: Link UP Jan 23 23:56:09.079357 systemd-networkd[904]: eth0: Link UP Jan 23 23:56:09.079481 systemd-networkd[904]: eth0: Gained carrier Jan 23 23:56:09.079490 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:56:09.099451 systemd-networkd[904]: enP43060s1: Gained carrier Jan 23 23:56:09.109447 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:56:09.942900 ignition[906]: Ignition 2.19.0 Jan 23 23:56:09.942911 ignition[906]: Stage: fetch-offline Jan 23 23:56:09.945832 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:56:09.942952 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:56:09.942960 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:56:09.943058 ignition[906]: parsed url from cmdline: "" Jan 23 23:56:09.965690 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:56:09.943061 ignition[906]: no config URL provided Jan 23 23:56:09.943066 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:56:09.943073 ignition[906]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:56:09.943077 ignition[906]: failed to fetch config: resource requires networking Jan 23 23:56:09.943585 ignition[906]: Ignition finished successfully Jan 23 23:56:09.986936 ignition[922]: Ignition 2.19.0 Jan 23 23:56:09.986942 ignition[922]: Stage: fetch Jan 23 23:56:09.987164 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:56:09.987178 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:56:09.987286 ignition[922]: parsed url from cmdline: "" Jan 23 23:56:09.987290 ignition[922]: no config URL provided Jan 23 23:56:09.987295 ignition[922]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:56:09.987303 ignition[922]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:56:09.987328 ignition[922]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 23:56:10.108851 ignition[922]: GET result: OK Jan 23 23:56:10.111223 ignition[922]: config has been read from IMDS userdata Jan 23 23:56:10.111266 ignition[922]: parsing config with SHA512: f88abf7f675d415b4db1b54d369d8c22aaf392d53a3fe948a97a9e5cc3c7b945b61f87af45af157ae2f2ba7b23ba3031ccef447d2140ff81ae48e0c412329d94 Jan 23 23:56:10.115295 unknown[922]: fetched base config from "system" Jan 23 23:56:10.115692 ignition[922]: fetch: fetch complete Jan 23 23:56:10.115302 unknown[922]: fetched base config from "system" Jan 23 23:56:10.115696 ignition[922]: fetch: fetch passed Jan 23 23:56:10.115306 unknown[922]: fetched user config from "azure" Jan 23 23:56:10.115742 ignition[922]: Ignition finished successfully Jan 23 23:56:10.119281 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:56:10.134639 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 23:56:10.156331 ignition[928]: Ignition 2.19.0
Jan 23 23:56:10.156339 ignition[928]: Stage: kargs
Jan 23 23:56:10.160774 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:56:10.156525 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:10.156534 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:10.157625 ignition[928]: kargs: kargs passed
Jan 23 23:56:10.157675 ignition[928]: Ignition finished successfully
Jan 23 23:56:10.180654 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:56:10.200731 ignition[934]: Ignition 2.19.0
Jan 23 23:56:10.200741 ignition[934]: Stage: disks
Jan 23 23:56:10.205071 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:56:10.200913 ignition[934]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:10.212273 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:56:10.200923 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:10.221547 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:56:10.201820 ignition[934]: disks: disks passed
Jan 23 23:56:10.230467 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:56:10.201873 ignition[934]: Ignition finished successfully
Jan 23 23:56:10.239013 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:56:10.247745 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:56:10.266675 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:56:10.358706 systemd-fsck[943]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 23 23:56:10.366588 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:56:10.381610 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:56:10.431446 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:56:10.431054 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:56:10.435332 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:56:10.478490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:56:10.502644 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954)
Jan 23 23:56:10.502678 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:10.507657 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:10.511537 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:56:10.523344 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:56:10.519629 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:56:10.527976 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 23:56:10.538974 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:56:10.539492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:56:10.555697 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:56:10.563333 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:56:10.576680 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:56:10.954525 systemd-networkd[904]: eth0: Gained IPv6LL
Jan 23 23:56:11.028243 coreos-metadata[971]: Jan 23 23:56:11.028 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 23:56:11.036444 coreos-metadata[971]: Jan 23 23:56:11.036 INFO Fetch successful
Jan 23 23:56:11.040718 coreos-metadata[971]: Jan 23 23:56:11.040 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 23:56:11.060712 coreos-metadata[971]: Jan 23 23:56:11.060 INFO Fetch successful
Jan 23 23:56:11.077470 coreos-metadata[971]: Jan 23 23:56:11.077 INFO wrote hostname ci-4081.3.6-n-31deed6810 to /sysroot/etc/hostname
Jan 23 23:56:11.085043 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:56:11.295766 initrd-setup-root[983]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:56:11.319464 initrd-setup-root[990]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:56:11.342287 initrd-setup-root[997]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:56:11.347968 initrd-setup-root[1004]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:56:12.237383 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:56:12.252613 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:56:12.269755 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:56:12.276701 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:12.278782 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:56:12.309176 ignition[1071]: INFO : Ignition 2.19.0
Jan 23 23:56:12.309176 ignition[1071]: INFO : Stage: mount
Jan 23 23:56:12.309176 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:12.309176 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:12.304728 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:56:12.333981 ignition[1071]: INFO : mount: mount passed
Jan 23 23:56:12.333981 ignition[1071]: INFO : Ignition finished successfully
Jan 23 23:56:12.314280 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:56:12.341629 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:56:12.355949 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:56:12.380427 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1083)
Jan 23 23:56:12.398680 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:12.398731 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:12.401891 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:56:12.408419 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:56:12.410070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:56:12.440116 ignition[1100]: INFO : Ignition 2.19.0
Jan 23 23:56:12.444168 ignition[1100]: INFO : Stage: files
Jan 23 23:56:12.444168 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:12.451440 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:12.451440 ignition[1100]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:56:12.461985 ignition[1100]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:56:12.461985 ignition[1100]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:56:12.558813 ignition[1100]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:56:12.564782 ignition[1100]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:56:12.564782 ignition[1100]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:56:12.559832 unknown[1100]: wrote ssh authorized keys file for user: core
Jan 23 23:56:12.585492 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:56:12.594068 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 23:56:12.650365 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:56:12.969462 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:56:13.029488 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 23 23:56:13.509824 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 23:56:13.785166 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:56:13.785166 ignition[1100]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:56:13.801339 ignition[1100]: INFO : files: files passed
Jan 23 23:56:13.801339 ignition[1100]: INFO : Ignition finished successfully
Jan 23 23:56:13.796701 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:56:13.833866 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:56:13.841606 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:56:13.853294 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:56:13.909850 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:13.909850 initrd-setup-root-after-ignition[1127]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:13.853450 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:56:13.935564 initrd-setup-root-after-ignition[1131]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:13.881907 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:56:13.890781 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:56:13.916683 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:56:13.961284 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:56:13.961402 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:56:13.971487 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 23:56:13.980932 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 23:56:13.989495 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 23:56:14.004797 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 23:56:14.019912 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:56:14.033860 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 23:56:14.048881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:56:14.054717 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:56:14.064906 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 23:56:14.074056 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:56:14.074177 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:56:14.086816 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:56:14.091387 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 23:56:14.100276 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 23:56:14.109357 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:56:14.118360 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 23:56:14.127976 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 23:56:14.137252 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:56:14.146887 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 23:56:14.155742 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 23:56:14.165485 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 23:56:14.173237 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 23:56:14.173359 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:56:14.185237 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:56:14.193749 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:56:14.203260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 23:56:14.207392 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:56:14.213126 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 23:56:14.213265 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:56:14.226887 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 23:56:14.227012 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:56:14.232470 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 23:56:14.232561 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 23:56:14.242640 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 23 23:56:14.242732 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:56:14.267693 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 23:56:14.274498 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 23:56:14.274655 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:56:14.320635 ignition[1152]: INFO : Ignition 2.19.0
Jan 23 23:56:14.320635 ignition[1152]: INFO : Stage: umount
Jan 23 23:56:14.320635 ignition[1152]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:14.320635 ignition[1152]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:56:14.320635 ignition[1152]: INFO : umount: umount passed
Jan 23 23:56:14.320635 ignition[1152]: INFO : Ignition finished successfully
Jan 23 23:56:14.305474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 23:56:14.315035 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 23:56:14.315207 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:56:14.320854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 23:56:14.320955 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:56:14.328587 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 23:56:14.329212 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 23:56:14.330455 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 23:56:14.335938 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 23:56:14.336203 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 23:56:14.344836 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 23:56:14.344906 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 23:56:14.350286 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 23:56:14.350335 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 23:56:14.362011 systemd[1]: Stopped target network.target - Network.
Jan 23 23:56:14.372986 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 23:56:14.373054 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:56:14.381664 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 23:56:14.390705 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 23:56:14.394331 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:56:14.399866 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 23:56:14.408029 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 23:56:14.418396 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 23:56:14.418457 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:56:14.426612 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 23:56:14.426658 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:56:14.437581 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 23:56:14.437641 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 23:56:14.445623 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 23:56:14.445663 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 23:56:14.454334 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 23:56:14.463752 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 23:56:14.472360 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 23:56:14.474439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 23:56:14.670364 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: Data path switched from VF: enP43060s1
Jan 23 23:56:14.475819 systemd-networkd[904]: eth0: DHCPv6 lease lost
Jan 23 23:56:14.482846 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 23:56:14.482970 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 23:56:14.493242 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 23:56:14.494434 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 23:56:14.504739 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 23:56:14.504791 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:56:14.521714 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 23:56:14.528763 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 23:56:14.528838 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:56:14.539033 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 23:56:14.539085 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:56:14.547076 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 23:56:14.547116 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:56:14.557105 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 23:56:14.557147 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:56:14.569138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:56:14.598940 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 23:56:14.599132 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:56:14.605610 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 23:56:14.605660 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:56:14.613600 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 23:56:14.613629 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:56:14.622377 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 23:56:14.622432 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:56:14.636065 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 23:56:14.636148 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:56:14.655926 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:56:14.655990 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:14.685544 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 23:56:14.694993 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 23:56:14.695060 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:56:14.704774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:56:14.704816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:14.720553 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 23:56:14.720648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 23:56:14.729543 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 23:56:14.729635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 23:56:14.739889 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 23:56:14.739993 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 23:56:14.754744 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 23:56:14.754896 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 23:56:14.763567 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 23:56:14.781643 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 23:56:14.960664 systemd[1]: Switching root.
Jan 23 23:56:15.049016 systemd-journald[217]: Journal stopped
Jan 23 23:56:20.055734 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 23 23:56:20.055762 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 23:56:20.055773 kernel: SELinux: policy capability open_perms=1
Jan 23 23:56:20.055784 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 23:56:20.055792 kernel: SELinux: policy capability always_check_network=0
Jan 23 23:56:20.055801 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 23:56:20.055810 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 23:56:20.055819 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 23:56:20.055828 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 23:56:20.055837 kernel: audit: type=1403 audit(1769212576.317:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 23:56:20.055850 systemd[1]: Successfully loaded SELinux policy in 180.980ms.
Jan 23 23:56:20.055860 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.630ms.
Jan 23 23:56:20.055871 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:56:20.055881 systemd[1]: Detected virtualization microsoft.
Jan 23 23:56:20.055891 systemd[1]: Detected architecture arm64.
Jan 23 23:56:20.055904 systemd[1]: Detected first boot.
Jan 23 23:56:20.055914 systemd[1]: Hostname set to <ci-4081.3.6-n-31deed6810>.
Jan 23 23:56:20.055924 systemd[1]: Initializing machine ID from random generator.
Jan 23 23:56:20.055936 zram_generator::config[1194]: No configuration found.
Jan 23 23:56:20.055946 systemd[1]: Populated /etc with preset unit settings.
Jan 23 23:56:20.055958 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 23:56:20.055971 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 23:56:20.055981 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 23:56:20.055992 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 23:56:20.056002 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 23:56:20.056013 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 23:56:20.056024 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 23:56:20.056033 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 23:56:20.056045 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 23:56:20.056056 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 23:56:20.056065 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 23:56:20.056075 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:56:20.056085 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:56:20.056095 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 23:56:20.056105 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 23:56:20.056116 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 23:56:20.056126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:56:20.056137 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 23 23:56:20.056147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:56:20.056157 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 23:56:20.056169 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 23:56:20.056180 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:56:20.056190 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 23:56:20.056199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:56:20.056211 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:56:20.056220 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:56:20.056230 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:56:20.056240 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 23:56:20.056249 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 23:56:20.056259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:56:20.056270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:56:20.056283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:56:20.056294 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 23:56:20.056304 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 23:56:20.056314 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 23:56:20.056324 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 23:56:20.056334 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 23:56:20.056347 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 23:56:20.056357 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 23:56:20.056367 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 23:56:20.056377 systemd[1]: Reached target machines.target - Containers.
Jan 23 23:56:20.056387 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 23:56:20.056397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:56:20.056413 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:56:20.056424 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 23:56:20.056437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:56:20.056447 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:56:20.056457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:56:20.056467 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 23:56:20.056477 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:56:20.056488 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 23:56:20.056498 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 23:56:20.056509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 23:56:20.056520 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 23:56:20.056533 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 23:56:20.056543 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:56:20.056554 kernel: fuse: init (API version 7.39)
Jan 23 23:56:20.056564 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:56:20.056575 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 23:56:20.056585 kernel: loop: module loaded
Jan 23 23:56:20.056610 systemd-journald[1297]: Collecting audit messages is disabled.
Jan 23 23:56:20.056633 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 23:56:20.056644 systemd-journald[1297]: Journal started
Jan 23 23:56:20.056666 systemd-journald[1297]: Runtime Journal (/run/log/journal/2bb04978bcf844a6ab458c5e92cc2655) is 8.0M, max 78.5M, 70.5M free.
Jan 23 23:56:19.230713 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 23:56:19.367196 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 23:56:19.367569 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 23:56:19.367864 systemd[1]: systemd-journald.service: Consumed 2.494s CPU time.
Jan 23 23:56:20.081434 kernel: ACPI: bus type drm_connector registered
Jan 23 23:56:20.093616 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:56:20.100653 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 23:56:20.100681 systemd[1]: Stopped verity-setup.service.
Jan 23 23:56:20.116784 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:56:20.117771 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 23:56:20.122696 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 23:56:20.127656 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 23:56:20.132058 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 23:56:20.137238 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 23:56:20.142353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 23:56:20.146821 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 23:56:20.152343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:56:20.158158 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 23:56:20.158296 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 23:56:20.163812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:56:20.163969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:56:20.169437 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:56:20.169571 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:56:20.174773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:56:20.174930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:56:20.180653 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 23:56:20.180787 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 23:56:20.186076 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:56:20.186223 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:56:20.191331 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:56:20.196934 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 23:56:20.202918 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 23:56:20.209011 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:56:20.226555 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 23:56:20.242550 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 23:56:20.249073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 23:56:20.254210 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 23:56:20.254253 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:56:20.260156 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 23 23:56:20.267030 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 23:56:20.273304 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 23:56:20.278205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:56:20.305612 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 23:56:20.311836 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 23:56:20.317580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:56:20.318676 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 23:56:20.323811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:56:20.324941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:56:20.332624 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 23:56:20.340597 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 23:56:20.347620 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 23 23:56:20.358263 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 23:56:20.368640 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 23:56:20.374444 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 23:56:20.384842 systemd-journald[1297]: Time spent on flushing to /var/log/journal/2bb04978bcf844a6ab458c5e92cc2655 is 40.209ms for 897 entries.
Jan 23 23:56:20.384842 systemd-journald[1297]: System Journal (/var/log/journal/2bb04978bcf844a6ab458c5e92cc2655) is 11.8M, max 2.6G, 2.6G free.
Jan 23 23:56:20.466835 systemd-journald[1297]: Received client request to flush runtime journal.
Jan 23 23:56:20.466886 systemd-journald[1297]: /var/log/journal/2bb04978bcf844a6ab458c5e92cc2655/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 23 23:56:20.466912 systemd-journald[1297]: Rotating system journal.
Jan 23 23:56:20.382017 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 23:56:20.399527 udevadm[1331]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 23 23:56:20.400240 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 23:56:20.423665 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 23 23:56:20.469480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 23:56:20.478569 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 23:56:20.479209 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 23 23:56:20.490604 kernel: loop0: detected capacity change from 0 to 31320
Jan 23 23:56:20.512955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:56:20.541380 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 23:56:20.555585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:56:20.636178 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
Jan 23 23:56:20.636602 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
Jan 23 23:56:20.642452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:56:20.933548 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 23:56:21.018433 kernel: loop1: detected capacity change from 0 to 114432
Jan 23 23:56:21.180678 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 23:56:21.189671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:56:21.216147 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Jan 23 23:56:21.364166 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:56:21.380327 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:56:21.394433 kernel: loop2: detected capacity change from 0 to 114328
Jan 23 23:56:21.418636 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 23:56:21.457729 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 23 23:56:21.481932 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 23:56:21.536432 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 23:56:21.566433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#75 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:56:21.578433 kernel: hv_vmbus: registering driver hv_balloon
Jan 23 23:56:21.586902 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 23 23:56:21.586981 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 23 23:56:21.590806 systemd-networkd[1363]: lo: Link UP
Jan 23 23:56:21.590815 systemd-networkd[1363]: lo: Gained carrier
Jan 23 23:56:21.593894 systemd-networkd[1363]: Enumeration completed
Jan 23 23:56:21.593990 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:56:21.599225 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:56:21.599233 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:56:21.611475 kernel: hv_vmbus: registering driver hyperv_fb
Jan 23 23:56:21.611650 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 23:56:21.621266 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 23 23:56:21.621317 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 23 23:56:21.628460 kernel: Console: switching to colour dummy device 80x25
Jan 23 23:56:21.639457 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 23:56:21.646320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:21.678449 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1359)
Jan 23 23:56:21.697420 kernel: mlx5_core a834:00:02.0 enP43060s1: Link up
Jan 23 23:56:21.699941 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:56:21.701652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:21.727572 kernel: hv_netvsc 000d3afd-dab2-000d-3afd-dab2000d3afd eth0: Data path switched to VF: enP43060s1
Jan 23 23:56:21.722719 systemd-networkd[1363]: enP43060s1: Link UP
Jan 23 23:56:21.722838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:21.722996 systemd-networkd[1363]: eth0: Link UP
Jan 23 23:56:21.723000 systemd-networkd[1363]: eth0: Gained carrier
Jan 23 23:56:21.723017 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:56:21.738075 systemd-networkd[1363]: enP43060s1: Gained carrier
Jan 23 23:56:21.743271 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 23:56:21.759191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 23:56:21.768569 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 23:56:21.787427 kernel: loop3: detected capacity change from 0 to 200800
Jan 23 23:56:21.813512 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 23 23:56:21.823426 kernel: loop4: detected capacity change from 0 to 31320
Jan 23 23:56:21.830931 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 23 23:56:21.837520 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 23:56:21.846557 kernel: loop5: detected capacity change from 0 to 114432
Jan 23 23:56:21.861439 kernel: loop6: detected capacity change from 0 to 114328
Jan 23 23:56:21.873445 kernel: loop7: detected capacity change from 0 to 200800
Jan 23 23:56:21.885787 (sd-merge)[1451]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 23 23:56:21.886229 (sd-merge)[1451]: Merged extensions into '/usr'.
Jan 23 23:56:21.889893 systemd[1]: Reloading requested from client PID 1328 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 23:56:21.889905 systemd[1]: Reloading...
Jan 23 23:56:21.923287 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:56:21.961467 zram_generator::config[1483]: No configuration found.
Jan 23 23:56:22.114604 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:56:22.196904 systemd[1]: Reloading finished in 306 ms.
Jan 23 23:56:22.225391 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 23:56:22.232606 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 23 23:56:22.241278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:56:22.261649 systemd[1]: Starting ensure-sysext.service...
Jan 23 23:56:22.266236 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 23 23:56:22.272609 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:56:22.279265 lvm[1542]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:56:22.282108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:22.294237 systemd[1]: Reloading requested from client PID 1541 ('systemctl') (unit ensure-sysext.service)...
Jan 23 23:56:22.294256 systemd[1]: Reloading...
Jan 23 23:56:22.326331 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 23:56:22.326624 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 23:56:22.327268 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 23:56:22.327911 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Jan 23 23:56:22.328026 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Jan 23 23:56:22.346884 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:56:22.347046 systemd-tmpfiles[1543]: Skipping /boot
Jan 23 23:56:22.357480 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:56:22.357607 systemd-tmpfiles[1543]: Skipping /boot
Jan 23 23:56:22.373438 zram_generator::config[1575]: No configuration found.
Jan 23 23:56:22.482381 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:56:22.565997 systemd[1]: Reloading finished in 271 ms.
Jan 23 23:56:22.589936 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 23 23:56:22.596099 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:56:22.614731 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 23 23:56:22.621246 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 23:56:22.628766 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 23:56:22.649841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:56:22.656693 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 23:56:22.669772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:56:22.675727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:56:22.692731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:56:22.700757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:56:22.706783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:56:22.709515 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 23:56:22.716709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:56:22.716874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:56:22.724997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:56:22.725152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:56:22.732488 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 23:56:22.739055 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:56:22.739193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:56:22.751210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:56:22.758731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:56:22.768706 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:56:22.775550 systemd-resolved[1643]: Positive Trust Anchors:
Jan 23 23:56:22.775867 systemd-resolved[1643]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:56:22.775954 systemd-resolved[1643]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:56:22.777345 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:56:22.785734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:56:22.790215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:56:22.790386 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 23:56:22.795728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:56:22.795883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:56:22.801655 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:56:22.801799 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:56:22.806951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:56:22.807086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:56:22.811361 systemd-resolved[1643]: Using system hostname 'ci-4081.3.6-n-31deed6810'.
Jan 23 23:56:22.812926 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:56:22.813075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:56:22.818314 augenrules[1665]: No rules
Jan 23 23:56:22.818354 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:56:22.823910 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 23 23:56:22.831450 systemd[1]: Finished ensure-sysext.service.
Jan 23 23:56:22.838251 systemd[1]: Reached target network.target - Network.
Jan 23 23:56:22.842012 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:56:22.846847 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:56:22.846908 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:56:23.131251 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 23:56:23.137075 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 23:56:23.242521 systemd-networkd[1363]: eth0: Gained IPv6LL
Jan 23 23:56:23.244238 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 23:56:23.250484 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 23:56:25.521442 ldconfig[1323]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 23:56:25.531130 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 23:56:25.540655 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 23:56:25.553959 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 23:56:25.559129 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:56:25.563735 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 23:56:25.569007 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 23:56:25.574669 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 23:56:25.579142 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 23:56:25.584651 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 23:56:25.590272 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 23:56:25.590304 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:56:25.594087 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:56:25.598974 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 23:56:25.605319 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 23:56:25.614068 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 23:56:25.619504 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 23:56:25.624478 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:56:25.628608 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:56:25.632920 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 23:56:25.632948 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 23:56:25.654526 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 23 23:56:25.661519 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 23:56:25.675515 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 23:56:25.681606 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 23:56:25.686609 (chronyd)[1684]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 23 23:56:25.690181 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 23:56:25.696579 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 23:56:25.698192 jq[1689]: false
Jan 23 23:56:25.700879 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 23:56:25.700915 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 23 23:56:25.702614 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 23:56:25.712377 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 23:56:25.714144 KVP[1692]: KVP starting; pid is:1692 Jan 23 23:56:25.716516 chronyd[1695]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 23 23:56:25.719936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:25.725545 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:56:25.732603 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:56:25.737965 chronyd[1695]: Timezone right/UTC failed leap second check, ignoring Jan 23 23:56:25.738156 chronyd[1695]: Loaded seccomp filter (level 2) Jan 23 23:56:25.739595 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:56:25.749735 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:56:25.759225 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:56:25.768284 extend-filesystems[1691]: Found loop4 Jan 23 23:56:25.774651 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:56:25.776859 extend-filesystems[1691]: Found loop5 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found loop6 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found loop7 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found sda Jan 23 23:56:25.776859 extend-filesystems[1691]: Found sda1 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found sda2 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found sda3 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found usr Jan 23 23:56:25.776859 extend-filesystems[1691]: Found sda4 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found sda6 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found sda7 Jan 23 23:56:25.776859 extend-filesystems[1691]: Found sda9 Jan 23 23:56:25.776859 extend-filesystems[1691]: Checking size of /dev/sda9 Jan 23 23:56:25.902970 kernel: hv_utils: KVP IC version 4.0 Jan 23 23:56:25.783537 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:56:25.903188 extend-filesystems[1691]: Old size kept for /dev/sda9 Jan 23 23:56:25.903188 extend-filesystems[1691]: Found sr0 Jan 23 23:56:25.797884 KVP[1692]: KVP LIC Version: 3.1 Jan 23 23:56:25.784059 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:56:25.894332 dbus-daemon[1687]: [system] SELinux support is enabled Jan 23 23:56:25.790619 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:56:25.948077 update_engine[1713]: I20260123 23:56:25.914724 1713 main.cc:92] Flatcar Update Engine starting Jan 23 23:56:25.948077 update_engine[1713]: I20260123 23:56:25.928598 1713 update_check_scheduler.cc:74] Next update check in 4m53s Jan 23 23:56:25.809882 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 23 23:56:25.948349 jq[1715]: true Jan 23 23:56:25.820038 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 23:56:25.847091 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:56:25.847272 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:56:25.848817 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:56:25.848984 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:56:25.875340 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:56:25.921724 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:56:25.932388 systemd-logind[1709]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 23:56:25.932596 systemd-logind[1709]: New seat seat0. Jan 23 23:56:25.942598 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:56:25.954213 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:56:25.954456 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:56:25.972926 coreos-metadata[1686]: Jan 23 23:56:25.972 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:56:25.988616 coreos-metadata[1686]: Jan 23 23:56:25.978 INFO Fetch successful Jan 23 23:56:25.988616 coreos-metadata[1686]: Jan 23 23:56:25.978 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 23:56:25.988616 coreos-metadata[1686]: Jan 23 23:56:25.983 INFO Fetch successful Jan 23 23:56:25.988616 coreos-metadata[1686]: Jan 23 23:56:25.983 INFO Fetching http://168.63.129.16/machine/3bc7bafa-5d88-407e-bbef-bf01a79fc762/6130a4c0%2D5e1c%2D49b0%2D89c8%2Dc8ca7f3d9362.%5Fci%2D4081.3.6%2Dn%2D31deed6810?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 23:56:25.980856 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:56:25.981479 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:56:26.006137 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:56:26.006202 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:56:26.011158 (ntainerd)[1738]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:56:26.012510 dbus-daemon[1687]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:56:26.015512 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:56:26.015545 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:56:26.018202 jq[1737]: true Jan 23 23:56:26.018883 coreos-metadata[1686]: Jan 23 23:56:26.018 INFO Fetch successful Jan 23 23:56:26.018883 coreos-metadata[1686]: Jan 23 23:56:26.018 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:56:26.022816 systemd[1]: Started update-engine.service - Update Engine. 
Jan 23 23:56:26.034105 coreos-metadata[1686]: Jan 23 23:56:26.032 INFO Fetch successful Jan 23 23:56:26.034745 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:56:26.058539 tar[1732]: linux-arm64/LICENSE Jan 23 23:56:26.058539 tar[1732]: linux-arm64/helm Jan 23 23:56:26.100783 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:56:26.108763 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:56:26.164280 bash[1777]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:56:26.171563 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:56:26.188920 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 23:56:26.193585 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1731) Jan 23 23:56:26.349877 locksmithd[1755]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:56:26.677818 tar[1732]: linux-arm64/README.md Jan 23 23:56:26.693925 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:56:26.810841 containerd[1738]: time="2026-01-23T23:56:26.810747920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:56:26.868486 containerd[1738]: time="2026-01-23T23:56:26.868430800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:26.873811 containerd[1738]: time="2026-01-23T23:56:26.873765880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:26.873811 containerd[1738]: time="2026-01-23T23:56:26.873809000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:56:26.873903 containerd[1738]: time="2026-01-23T23:56:26.873829320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:56:26.874015 containerd[1738]: time="2026-01-23T23:56:26.873997600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:56:26.874043 containerd[1738]: time="2026-01-23T23:56:26.874019480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874104 containerd[1738]: time="2026-01-23T23:56:26.874087120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874135 containerd[1738]: time="2026-01-23T23:56:26.874102720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874282 containerd[1738]: time="2026-01-23T23:56:26.874262200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874307 containerd[1738]: time="2026-01-23T23:56:26.874281480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874307 containerd[1738]: time="2026-01-23T23:56:26.874297120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874340 containerd[1738]: time="2026-01-23T23:56:26.874307800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874391 containerd[1738]: time="2026-01-23T23:56:26.874376040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874617 containerd[1738]: time="2026-01-23T23:56:26.874597000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874718 containerd[1738]: time="2026-01-23T23:56:26.874700360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:26.874718 containerd[1738]: time="2026-01-23T23:56:26.874716720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:56:26.874804 containerd[1738]: time="2026-01-23T23:56:26.874788560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:56:26.874845 containerd[1738]: time="2026-01-23T23:56:26.874831760Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:56:26.894817 containerd[1738]: time="2026-01-23T23:56:26.894256080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.894982680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.895050120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.895080040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.895246640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.895464800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.895885000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.896004080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.896024840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.896039840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.896060280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.896078440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.896095440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.896113680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:56:26.896446 containerd[1738]: time="2026-01-23T23:56:26.896132760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896149280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896162680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896179560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896204440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896223000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896239800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896257200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896274000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896291400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896304960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896321360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896337880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896360360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896751 containerd[1738]: time="2026-01-23T23:56:26.896376360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.896658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:26.897056 containerd[1738]: time="2026-01-23T23:56:26.896392040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.899213 containerd[1738]: time="2026-01-23T23:56:26.896407800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.899213 containerd[1738]: time="2026-01-23T23:56:26.898581520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:56:26.899213 containerd[1738]: time="2026-01-23T23:56:26.898699960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.899213 containerd[1738]: time="2026-01-23T23:56:26.898721680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.899213 containerd[1738]: time="2026-01-23T23:56:26.898734760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:56:26.899379 containerd[1738]: time="2026-01-23T23:56:26.898886640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:56:26.899482 containerd[1738]: time="2026-01-23T23:56:26.899453560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:56:26.899549 containerd[1738]: time="2026-01-23T23:56:26.899537000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:56:26.902664 containerd[1738]: time="2026-01-23T23:56:26.902627440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:56:26.902664 containerd[1738]: time="2026-01-23T23:56:26.902661640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:56:26.902756 containerd[1738]: time="2026-01-23T23:56:26.902685680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:56:26.902756 containerd[1738]: time="2026-01-23T23:56:26.902698200Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:56:26.902756 containerd[1738]: time="2026-01-23T23:56:26.902708920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:56:26.903111 containerd[1738]: time="2026-01-23T23:56:26.902999320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:56:26.903285 containerd[1738]: time="2026-01-23T23:56:26.903115000Z" level=info msg="Connect containerd service" Jan 23 23:56:26.903285 containerd[1738]: time="2026-01-23T23:56:26.903154000Z" level=info msg="using legacy CRI server" Jan 23 23:56:26.903285 containerd[1738]: time="2026-01-23T23:56:26.903167240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:56:26.903285 containerd[1738]: time="2026-01-23T23:56:26.903258800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:56:26.905420 containerd[1738]: time="2026-01-23T23:56:26.903849760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:56:26.905420 
containerd[1738]: time="2026-01-23T23:56:26.904123480Z" level=info msg="Start subscribing containerd event" Jan 23 23:56:26.905420 containerd[1738]: time="2026-01-23T23:56:26.904195400Z" level=info msg="Start recovering state" Jan 23 23:56:26.907821 containerd[1738]: time="2026-01-23T23:56:26.904148600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:56:26.907821 containerd[1738]: time="2026-01-23T23:56:26.905638240Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:56:26.908103 containerd[1738]: time="2026-01-23T23:56:26.908067840Z" level=info msg="Start event monitor" Jan 23 23:56:26.908129 containerd[1738]: time="2026-01-23T23:56:26.908106040Z" level=info msg="Start snapshots syncer" Jan 23 23:56:26.908129 containerd[1738]: time="2026-01-23T23:56:26.908117520Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:56:26.908129 containerd[1738]: time="2026-01-23T23:56:26.908126120Z" level=info msg="Start streaming server" Jan 23 23:56:26.908265 containerd[1738]: time="2026-01-23T23:56:26.908246200Z" level=info msg="containerd successfully booted in 0.099487s" Jan 23 23:56:26.911543 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:56:26.916398 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:27.167543 sshd_keygen[1717]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:56:27.198677 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:56:27.212073 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:56:27.218668 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 23:56:27.224788 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:56:27.226458 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:56:27.247735 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:56:27.260380 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 23:56:27.274270 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:56:27.285759 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:56:27.297724 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 23:56:27.304763 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:56:27.308969 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:56:27.314636 systemd[1]: Startup finished in 613ms (kernel) + 11.833s (initrd) + 11.176s (userspace) = 23.623s. Jan 23 23:56:27.339880 kubelet[1821]: E0123 23:56:27.339840 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:27.342492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:27.342638 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 23:56:27.906786 login[1850]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:27.914003 login[1851]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:27.917031 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:56:27.926796 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:56:27.929424 systemd-logind[1709]: New session 1 of user core. Jan 23 23:56:27.933183 systemd-logind[1709]: New session 2 of user core. Jan 23 23:56:27.959798 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:56:27.964736 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:56:27.984986 (systemd)[1859]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:56:28.128649 systemd[1859]: Queued start job for default target default.target. Jan 23 23:56:28.136283 systemd[1859]: Created slice app.slice - User Application Slice. Jan 23 23:56:28.136469 systemd[1859]: Reached target paths.target - Paths. Jan 23 23:56:28.136547 systemd[1859]: Reached target timers.target - Timers. Jan 23 23:56:28.137777 systemd[1859]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:56:28.150342 systemd[1859]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:56:28.150479 systemd[1859]: Reached target sockets.target - Sockets. Jan 23 23:56:28.150493 systemd[1859]: Reached target basic.target - Basic System. Jan 23 23:56:28.150535 systemd[1859]: Reached target default.target - Main User Target. Jan 23 23:56:28.150562 systemd[1859]: Startup finished in 158ms. Jan 23 23:56:28.150664 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:56:28.160576 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:56:28.161306 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 23 23:56:29.283640 waagent[1846]: 2026-01-23T23:56:29.283549Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 23 23:56:29.287992 waagent[1846]: 2026-01-23T23:56:29.287934Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 23 23:56:29.291426 waagent[1846]: 2026-01-23T23:56:29.291376Z INFO Daemon Daemon Python: 3.11.9 Jan 23 23:56:29.294946 waagent[1846]: 2026-01-23T23:56:29.294744Z INFO Daemon Daemon Run daemon Jan 23 23:56:29.297812 waagent[1846]: 2026-01-23T23:56:29.297773Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 23 23:56:29.304719 waagent[1846]: 2026-01-23T23:56:29.304668Z INFO Daemon Daemon Using waagent for provisioning Jan 23 23:56:29.308984 waagent[1846]: 2026-01-23T23:56:29.308942Z INFO Daemon Daemon Activate resource disk Jan 23 23:56:29.312555 waagent[1846]: 2026-01-23T23:56:29.312514Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 23:56:29.321990 waagent[1846]: 2026-01-23T23:56:29.321932Z INFO Daemon Daemon Found device: None Jan 23 23:56:29.325433 waagent[1846]: 2026-01-23T23:56:29.325382Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 23:56:29.331859 waagent[1846]: 2026-01-23T23:56:29.331816Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 23:56:29.342336 waagent[1846]: 2026-01-23T23:56:29.342282Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:56:29.346849 waagent[1846]: 2026-01-23T23:56:29.346804Z INFO Daemon Daemon Running default provisioning handler Jan 23 23:56:29.357997 waagent[1846]: 2026-01-23T23:56:29.357927Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 23:56:29.368552 waagent[1846]: 2026-01-23T23:56:29.368493Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 23:56:29.375725 waagent[1846]: 2026-01-23T23:56:29.375678Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 23:56:29.379726 waagent[1846]: 2026-01-23T23:56:29.379685Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 23:56:29.502699 waagent[1846]: 2026-01-23T23:56:29.502235Z INFO Daemon Daemon Successfully mounted dvd Jan 23 23:56:29.531527 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 23 23:56:29.532087 waagent[1846]: 2026-01-23T23:56:29.531898Z INFO Daemon Daemon Detect protocol endpoint Jan 23 23:56:29.535705 waagent[1846]: 2026-01-23T23:56:29.535608Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:56:29.540126 waagent[1846]: 2026-01-23T23:56:29.540080Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 23 23:56:29.545085 waagent[1846]: 2026-01-23T23:56:29.545045Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 23:56:29.549176 waagent[1846]: 2026-01-23T23:56:29.549134Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 23:56:29.553108 waagent[1846]: 2026-01-23T23:56:29.553068Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 23:56:29.600705 waagent[1846]: 2026-01-23T23:56:29.600653Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 23:56:29.606141 waagent[1846]: 2026-01-23T23:56:29.606114Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 23:56:29.610067 waagent[1846]: 2026-01-23T23:56:29.610026Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 23:56:30.084446 waagent[1846]: 2026-01-23T23:56:30.083630Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 23:56:30.089167 waagent[1846]: 2026-01-23T23:56:30.089109Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 23:56:30.097208 waagent[1846]: 2026-01-23T23:56:30.097159Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:56:30.117441 waagent[1846]: 2026-01-23T23:56:30.117386Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 23:56:30.122021 waagent[1846]: 2026-01-23T23:56:30.121972Z INFO Daemon Jan 23 23:56:30.124471 waagent[1846]: 2026-01-23T23:56:30.124434Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bf1ed7f8-d43c-4679-92c4-fde0a68af88e eTag: 2393695326628120593 source: Fabric] Jan 23 23:56:30.133082 waagent[1846]: 2026-01-23T23:56:30.133041Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 23:56:30.138462 waagent[1846]: 2026-01-23T23:56:30.138406Z INFO Daemon Jan 23 23:56:30.140603 waagent[1846]: 2026-01-23T23:56:30.140567Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:56:30.149862 waagent[1846]: 2026-01-23T23:56:30.149829Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 23:56:30.295857 waagent[1846]: 2026-01-23T23:56:30.295768Z INFO Daemon Downloaded certificate {'thumbprint': 'F2A65AB462B295254FBA0EC237849B2CB08B0F58', 'hasPrivateKey': True} Jan 23 23:56:30.303871 waagent[1846]: 2026-01-23T23:56:30.303820Z INFO Daemon Fetch goal state completed Jan 23 23:56:30.350947 waagent[1846]: 2026-01-23T23:56:30.350865Z INFO Daemon Daemon Starting provisioning Jan 23 23:56:30.355166 waagent[1846]: 2026-01-23T23:56:30.355103Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 23:56:30.358975 waagent[1846]: 2026-01-23T23:56:30.358930Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-31deed6810] Jan 23 23:56:30.380428 waagent[1846]: 2026-01-23T23:56:30.379592Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-31deed6810] Jan 23 23:56:30.384557 waagent[1846]: 2026-01-23T23:56:30.384500Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 23:56:30.389386 waagent[1846]: 2026-01-23T23:56:30.389343Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 23:56:30.432134 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:56:30.432141 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 23:56:30.432170 systemd-networkd[1363]: eth0: DHCP lease lost Jan 23 23:56:30.437445 waagent[1846]: 2026-01-23T23:56:30.433450Z INFO Daemon Daemon Create user account if not exists Jan 23 23:56:30.438062 waagent[1846]: 2026-01-23T23:56:30.438010Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 23:56:30.442551 waagent[1846]: 2026-01-23T23:56:30.442501Z INFO Daemon Daemon Configure sudoer Jan 23 23:56:30.442632 systemd-networkd[1363]: eth0: DHCPv6 lease lost Jan 23 23:56:30.446241 waagent[1846]: 2026-01-23T23:56:30.446169Z INFO Daemon Daemon Configure sshd Jan 23 23:56:30.449871 waagent[1846]: 2026-01-23T23:56:30.449816Z INFO Daemon Daemon Added a configuration snippet that disables SSH password-based authentication methods and configures SSH client probing to keep connections alive. Jan 23 23:56:30.459855 waagent[1846]: 2026-01-23T23:56:30.459800Z INFO Daemon Daemon Deploy ssh public key. Jan 23 23:56:30.477475 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:56:31.568417 waagent[1846]: 2026-01-23T23:56:31.568357Z INFO Daemon Daemon Provisioning complete Jan 23 23:56:31.584575 waagent[1846]: 2026-01-23T23:56:31.584531Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 23:56:31.589468 waagent[1846]: 2026-01-23T23:56:31.589423Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 23:56:31.596917 waagent[1846]: 2026-01-23T23:56:31.596877Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 23 23:56:31.730629 waagent[1911]: 2026-01-23T23:56:31.729982Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 23 23:56:31.730629 waagent[1911]: 2026-01-23T23:56:31.730130Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 23 23:56:31.730629 waagent[1911]: 2026-01-23T23:56:31.730182Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 23 23:56:31.774118 waagent[1911]: 2026-01-23T23:56:31.774039Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 23 23:56:31.774455 waagent[1911]: 2026-01-23T23:56:31.774400Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:56:31.774597 waagent[1911]: 2026-01-23T23:56:31.774565Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:56:31.782561 waagent[1911]: 2026-01-23T23:56:31.782501Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:56:31.788370 waagent[1911]: 2026-01-23T23:56:31.788325Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 23:56:31.788984 waagent[1911]: 2026-01-23T23:56:31.788945Z INFO ExtHandler Jan 23 23:56:31.789127 waagent[1911]: 2026-01-23T23:56:31.789095Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9f246d28-6267-4555-a18e-a8c9d8e526a0 eTag: 2393695326628120593 source: Fabric] Jan 23 23:56:31.789551 waagent[1911]: 2026-01-23T23:56:31.789512Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 23 23:56:31.790209 waagent[1911]: 2026-01-23T23:56:31.790165Z INFO ExtHandler Jan 23 23:56:31.790956 waagent[1911]: 2026-01-23T23:56:31.790312Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:56:31.794163 waagent[1911]: 2026-01-23T23:56:31.794132Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:56:31.861798 waagent[1911]: 2026-01-23T23:56:31.861663Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F2A65AB462B295254FBA0EC237849B2CB08B0F58', 'hasPrivateKey': True} Jan 23 23:56:31.862290 waagent[1911]: 2026-01-23T23:56:31.862239Z INFO ExtHandler Fetch goal state completed Jan 23 23:56:31.878806 waagent[1911]: 2026-01-23T23:56:31.878751Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1911 Jan 23 23:56:31.878967 waagent[1911]: 2026-01-23T23:56:31.878931Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 23:56:31.880672 waagent[1911]: 2026-01-23T23:56:31.880630Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 23:56:31.881041 waagent[1911]: 2026-01-23T23:56:31.881005Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 23:56:31.915856 waagent[1911]: 2026-01-23T23:56:31.915812Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 23:56:31.921359 waagent[1911]: 2026-01-23T23:56:31.921295Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 23:56:31.927955 waagent[1911]: 2026-01-23T23:56:31.927894Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 23:56:31.934854 systemd[1]: Reloading requested from client PID 1924 ('systemctl') (unit waagent.service)... Jan 23 23:56:31.935109 systemd[1]: Reloading... Jan 23 23:56:32.005436 zram_generator::config[1956]: No configuration found. Jan 23 23:56:32.116403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:32.206565 systemd[1]: Reloading finished in 271 ms. Jan 23 23:56:32.229804 waagent[1911]: 2026-01-23T23:56:32.229721Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 23 23:56:32.237224 systemd[1]: Reloading requested from client PID 2014 ('systemctl') (unit waagent.service)... Jan 23 23:56:32.237237 systemd[1]: Reloading... Jan 23 23:56:32.310478 zram_generator::config[2051]: No configuration found. Jan 23 23:56:32.420130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:32.505301 systemd[1]: Reloading finished in 267 ms. Jan 23 23:56:32.527335 waagent[1911]: 2026-01-23T23:56:32.526560Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 23:56:32.527335 waagent[1911]: 2026-01-23T23:56:32.526718Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 23:56:32.859131 waagent[1911]: 2026-01-23T23:56:32.859006Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 23 23:56:32.860017 waagent[1911]: 2026-01-23T23:56:32.859969Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 23 23:56:32.860831 waagent[1911]: 2026-01-23T23:56:32.860781Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 23:56:32.860961 waagent[1911]: 2026-01-23T23:56:32.860918Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:56:32.861046 waagent[1911]: 2026-01-23T23:56:32.861015Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:56:32.861263 waagent[1911]: 2026-01-23T23:56:32.861225Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 23:56:32.861661 waagent[1911]: 2026-01-23T23:56:32.861606Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 23:56:32.861876 waagent[1911]: 2026-01-23T23:56:32.861731Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 23 23:56:32.861876 waagent[1911]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 23 23:56:32.861876 waagent[1911]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 23 23:56:32.861876 waagent[1911]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 23 23:56:32.861876 waagent[1911]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 23 23:56:32.861876 waagent[1911]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 23 23:56:32.861876 waagent[1911]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 23 23:56:32.862103 waagent[1911]: 2026-01-23T23:56:32.862061Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:56:32.862467 waagent[1911]: 2026-01-23T23:56:32.862392Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:56:32.862633 waagent[1911]: 2026-01-23T23:56:32.862594Z INFO EnvHandler ExtHandler Configure routes Jan 23 23:56:32.862700 waagent[1911]: 2026-01-23T23:56:32.862674Z INFO EnvHandler ExtHandler Gateway:None Jan 23 23:56:32.862747 waagent[1911]: 2026-01-23T23:56:32.862725Z INFO EnvHandler ExtHandler Routes:None Jan 23 23:56:32.863262 waagent[1911]: 2026-01-23T23:56:32.863216Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 23:56:32.863388 waagent[1911]: 2026-01-23T23:56:32.863316Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 23:56:32.864046 waagent[1911]: 2026-01-23T23:56:32.863857Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 23:56:32.864046 waagent[1911]: 2026-01-23T23:56:32.863923Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 23 23:56:32.864124 waagent[1911]: 2026-01-23T23:56:32.864082Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 23:56:32.869864 waagent[1911]: 2026-01-23T23:56:32.869800Z INFO ExtHandler ExtHandler Jan 23 23:56:32.870188 waagent[1911]: 2026-01-23T23:56:32.870146Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 9275df4b-c71f-4041-a911-854c57499a06 correlation 75a69c7e-8ae0-4ef3-bb01-fb6d8b07e912 created: 2026-01-23T23:55:33.868340Z] Jan 23 23:56:32.870870 waagent[1911]: 2026-01-23T23:56:32.870831Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 23:56:32.871487 waagent[1911]: 2026-01-23T23:56:32.871451Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 23 23:56:32.909796 waagent[1911]: 2026-01-23T23:56:32.909738Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E0BF29BE-E446-4D4F-B653-24C741DF9D0A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 23 23:56:32.917547 waagent[1911]: 2026-01-23T23:56:32.917472Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 23 23:56:32.917547 waagent[1911]: Executing ['ip', '-a', '-o', 'link']:
Jan 23 23:56:32.917547 waagent[1911]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 23 23:56:32.917547 waagent[1911]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fd:da:b2 brd ff:ff:ff:ff:ff:ff
Jan 23 23:56:32.917547 waagent[1911]: 3: enP43060s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fd:da:b2 brd ff:ff:ff:ff:ff:ff\ altname enP43060p0s2
Jan 23 23:56:32.917547 waagent[1911]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 23 23:56:32.917547 waagent[1911]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 23 23:56:32.917547 waagent[1911]: 2: eth0 inet 10.200.20.33/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 23 23:56:32.917547 waagent[1911]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 23 23:56:32.917547 waagent[1911]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 23 23:56:32.917547 waagent[1911]: 2: eth0 inet6 fe80::20d:3aff:fefd:dab2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 23 23:56:32.968225 waagent[1911]: 2026-01-23T23:56:32.968158Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules:
Jan 23 23:56:32.968225 waagent[1911]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:56:32.968225 waagent[1911]: pkts bytes target prot opt in out source destination
Jan 23 23:56:32.968225 waagent[1911]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:56:32.968225 waagent[1911]: pkts bytes target prot opt in out source destination
Jan 23 23:56:32.968225 waagent[1911]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:56:32.968225 waagent[1911]: pkts bytes target prot opt in out source destination
Jan 23 23:56:32.968225 waagent[1911]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 23 23:56:32.968225 waagent[1911]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 23 23:56:32.968225 waagent[1911]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 23 23:56:32.971226 waagent[1911]: 2026-01-23T23:56:32.971170Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 23 23:56:32.971226 waagent[1911]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:56:32.971226 waagent[1911]: pkts bytes target prot opt in out source destination
Jan 23 23:56:32.971226 waagent[1911]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:56:32.971226 waagent[1911]: pkts bytes target prot opt in out source destination
Jan 23 23:56:32.971226 waagent[1911]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 23:56:32.971226 waagent[1911]: pkts bytes target prot opt in out source destination
Jan 23 23:56:32.971226 waagent[1911]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 23 23:56:32.971226 waagent[1911]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 23 23:56:32.971226 waagent[1911]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 23 23:56:32.971526 waagent[1911]: 2026-01-23T23:56:32.971462Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 23:56:37.593214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:56:37.600579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:37.710178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:37.715008 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:37.746163 kubelet[2141]: E0123 23:56:37.746098 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:37.749609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:37.749880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:47.938474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:56:47.949091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:48.349144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:56:48.353574 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:48.390678 kubelet[2156]: E0123 23:56:48.390628 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:48.393426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:48.393688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:49.520347 chronyd[1695]: Selected source PHC0 Jan 23 23:56:50.096834 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:56:50.105670 systemd[1]: Started sshd@0-10.200.20.33:22-10.200.16.10:38326.service - OpenSSH per-connection server daemon (10.200.16.10:38326). Jan 23 23:56:50.634952 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 38326 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:50.636263 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:50.641315 systemd-logind[1709]: New session 3 of user core. Jan 23 23:56:50.648778 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:56:51.069172 systemd[1]: Started sshd@1-10.200.20.33:22-10.200.16.10:38334.service - OpenSSH per-connection server daemon (10.200.16.10:38334). Jan 23 23:56:51.559022 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 38334 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:51.560528 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:51.565586 systemd-logind[1709]: New session 4 of user core. Jan 23 23:56:51.571599 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:56:51.912809 sshd[2169]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:51.916325 systemd[1]: sshd@1-10.200.20.33:22-10.200.16.10:38334.service: Deactivated successfully. Jan 23 23:56:51.917996 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:56:51.918748 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:56:51.919970 systemd-logind[1709]: Removed session 4. Jan 23 23:56:51.988396 systemd[1]: Started sshd@2-10.200.20.33:22-10.200.16.10:38336.service - OpenSSH per-connection server daemon (10.200.16.10:38336). Jan 23 23:56:52.439934 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 38336 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:52.441238 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:52.444912 systemd-logind[1709]: New session 5 of user core. Jan 23 23:56:52.456790 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:56:52.765684 sshd[2176]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:52.768769 systemd[1]: sshd@2-10.200.20.33:22-10.200.16.10:38336.service: Deactivated successfully. Jan 23 23:56:52.770487 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:56:52.772235 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:56:52.773318 systemd-logind[1709]: Removed session 5. 
Jan 23 23:56:52.859008 systemd[1]: Started sshd@3-10.200.20.33:22-10.200.16.10:38348.service - OpenSSH per-connection server daemon (10.200.16.10:38348). Jan 23 23:56:53.347360 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 38348 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:53.348674 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:53.352404 systemd-logind[1709]: New session 6 of user core. Jan 23 23:56:53.359561 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:56:53.699606 sshd[2183]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:53.702811 systemd[1]: sshd@3-10.200.20.33:22-10.200.16.10:38348.service: Deactivated successfully. Jan 23 23:56:53.704288 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:56:53.704964 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:56:53.705933 systemd-logind[1709]: Removed session 6. Jan 23 23:56:53.768381 systemd[1]: Started sshd@4-10.200.20.33:22-10.200.16.10:38362.service - OpenSSH per-connection server daemon (10.200.16.10:38362). Jan 23 23:56:54.180757 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 38362 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:54.182094 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:54.187223 systemd-logind[1709]: New session 7 of user core. Jan 23 23:56:54.192601 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:56:54.579858 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:56:54.580159 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:54.594217 sudo[2193]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:54.665506 sshd[2190]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:54.669400 systemd[1]: sshd@4-10.200.20.33:22-10.200.16.10:38362.service: Deactivated successfully. Jan 23 23:56:54.671340 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:56:54.672308 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:56:54.673246 systemd-logind[1709]: Removed session 7. Jan 23 23:56:54.764672 systemd[1]: Started sshd@5-10.200.20.33:22-10.200.16.10:38364.service - OpenSSH per-connection server daemon (10.200.16.10:38364). Jan 23 23:56:55.248028 sshd[2198]: Accepted publickey for core from 10.200.16.10 port 38364 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:55.249473 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:55.253233 systemd-logind[1709]: New session 8 of user core. Jan 23 23:56:55.263573 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 23:56:55.523361 sudo[2202]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:56:55.523658 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:55.526982 sudo[2202]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:55.531753 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:56:55.532007 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:55.545645 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:55.546830 auditctl[2205]: No rules Jan 23 23:56:55.547255 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:56:55.547406 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:56:55.550062 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:55.572340 augenrules[2223]: No rules Jan 23 23:56:55.574000 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:56:55.575653 sudo[2201]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:55.653813 sshd[2198]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:55.657286 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:56:55.658023 systemd[1]: sshd@5-10.200.20.33:22-10.200.16.10:38364.service: Deactivated successfully. Jan 23 23:56:55.659604 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:56:55.660400 systemd-logind[1709]: Removed session 8. Jan 23 23:56:55.720870 systemd[1]: Started sshd@6-10.200.20.33:22-10.200.16.10:38376.service - OpenSSH per-connection server daemon (10.200.16.10:38376). Jan 23 23:56:56.129368 sshd[2231]: Accepted publickey for core from 10.200.16.10 port 38376 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:56.130878 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:56.134784 systemd-logind[1709]: New session 9 of user core. Jan 23 23:56:56.141578 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:56:56.365209 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:56:56.365816 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:57.526668 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:56:57.526839 (dockerd)[2249]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:56:58.039753 dockerd[2249]: time="2026-01-23T23:56:58.039511870Z" level=info msg="Starting up" Jan 23 23:56:58.438312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 23:56:58.444596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:58.640137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
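Both kubelet.service and docker.service log "Referenced but unset environment variable evaluates to an empty string" because systemd expands $VAR references in ExecStart to "" when no EnvironmentFile defines them. A sketch of those semantics (illustrative Go, not systemd's implementation):

    // Sketch of systemd's "$VAR"-style expansion: names with no definition
    // expand to the empty string, which is what the unit warnings report for
    // DOCKER_OPTS, KUBELET_EXTRA_ARGS, and friends.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	defined := map[string]string{} // e.g. contents of an EnvironmentFile
    	cmdline := "dockerd --host=fd:// $DOCKER_OPTS $DOCKER_OPT_MTU"
    	expanded := os.Expand(cmdline, func(name string) string {
    		return defined[name] // missing keys yield "", mirroring the warning
    	})
    	fmt.Println(expanded) // empty expansions leave only whitespace behind
    }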
Jan 23 23:56:58.644014 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:58.677869 kubelet[2276]: E0123 23:56:58.677821 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:58.680779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:58.681044 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:59.014717 dockerd[2249]: time="2026-01-23T23:56:59.014497773Z" level=info msg="Loading containers: start." Jan 23 23:56:59.150435 kernel: Initializing XFRM netlink socket Jan 23 23:56:59.326246 systemd-networkd[1363]: docker0: Link UP Jan 23 23:56:59.352039 dockerd[2249]: time="2026-01-23T23:56:59.351486782Z" level=info msg="Loading containers: done." Jan 23 23:56:59.364217 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2002160598-merged.mount: Deactivated successfully. Jan 23 23:56:59.368752 dockerd[2249]: time="2026-01-23T23:56:59.368707386Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:56:59.368865 dockerd[2249]: time="2026-01-23T23:56:59.368846986Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:56:59.369002 dockerd[2249]: time="2026-01-23T23:56:59.368985265Z" level=info msg="Daemon has completed initialization" Jan 23 23:56:59.448439 dockerd[2249]: time="2026-01-23T23:56:59.447998859Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:56:59.448811 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:57:00.128924 containerd[1738]: time="2026-01-23T23:57:00.128868942Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 23:57:00.978504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817242849.mount: Deactivated successfully. 
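Once dockerd logs "API listen on /run/docker.sock", the daemon is reachable over its Unix socket. A liveness check with the Docker Go SDK, assuming the stock github.com/docker/docker/client package and the default socket path:

    // Sketch: ping the freshly started daemon over /run/docker.sock.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	ping, err := cli.Ping(context.Background())
    	if err != nil {
    		log.Fatal(err) // daemon not up yet, or socket not readable
    	}
    	fmt.Println("docker API version:", ping.APIVersion)
    }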
Jan 23 23:57:02.101450 containerd[1738]: time="2026-01-23T23:57:02.101248102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:02.103834 containerd[1738]: time="2026-01-23T23:57:02.103609214Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040" Jan 23 23:57:02.106363 containerd[1738]: time="2026-01-23T23:57:02.106318685Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:02.110440 containerd[1738]: time="2026-01-23T23:57:02.110377592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:02.111503 containerd[1738]: time="2026-01-23T23:57:02.111472508Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.982565446s" Jan 23 23:57:02.111738 containerd[1738]: time="2026-01-23T23:57:02.111599708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 23 23:57:02.112662 containerd[1738]: time="2026-01-23T23:57:02.112638024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 23:57:03.332225 containerd[1738]: time="2026-01-23T23:57:03.332167264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:03.334857 containerd[1738]: time="2026-01-23T23:57:03.334612936Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477" Jan 23 23:57:03.337625 containerd[1738]: time="2026-01-23T23:57:03.337579926Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:03.342590 containerd[1738]: time="2026-01-23T23:57:03.342539389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:03.344208 containerd[1738]: time="2026-01-23T23:57:03.343796745Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.231125601s" Jan 23 23:57:03.344208 containerd[1738]: time="2026-01-23T23:57:03.343833025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 23 23:57:03.344679 
containerd[1738]: time="2026-01-23T23:57:03.344514743Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 23:57:04.306445 containerd[1738]: time="2026-01-23T23:57:04.305861807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:04.308432 containerd[1738]: time="2026-01-23T23:57:04.308204119Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716" Jan 23 23:57:04.311164 containerd[1738]: time="2026-01-23T23:57:04.311118509Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:04.316085 containerd[1738]: time="2026-01-23T23:57:04.315717814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:04.316817 containerd[1738]: time="2026-01-23T23:57:04.316786210Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 972.226387ms" Jan 23 23:57:04.316872 containerd[1738]: time="2026-01-23T23:57:04.316817410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 23 23:57:04.317321 containerd[1738]: time="2026-01-23T23:57:04.317298088Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 23:57:05.264343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844858568.mount: Deactivated successfully. 
Jan 23 23:57:05.528539 containerd[1738]: time="2026-01-23T23:57:05.527922798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:05.530405 containerd[1738]: time="2026-01-23T23:57:05.530372030Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253" Jan 23 23:57:05.533069 containerd[1738]: time="2026-01-23T23:57:05.533043661Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:05.536615 containerd[1738]: time="2026-01-23T23:57:05.536561289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:05.537228 containerd[1738]: time="2026-01-23T23:57:05.537042648Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.218942202s" Jan 23 23:57:05.537228 containerd[1738]: time="2026-01-23T23:57:05.537093687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 23 23:57:05.537563 containerd[1738]: time="2026-01-23T23:57:05.537510206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 23:57:06.714279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512274334.mount: Deactivated successfully. 
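The quoted pull times ("1.982565446s", "972.226387ms", ...) are Go time.Duration strings, so they parse and sum directly:

    // Sketch: the per-image pull times logged by containerd parse with
    // time.ParseDuration, so totalling them is one loop.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	pulls := []string{"1.982565446s", "1.231125601s", "972.226387ms", "1.218942202s"}
    	var total time.Duration
    	for _, s := range pulls {
    		d, err := time.ParseDuration(s)
    		if err != nil {
    			panic(err)
    		}
    		total += d
    	}
    	fmt.Println("total pull time:", total) // ~5.4s for the first four images
    }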
Jan 23 23:57:07.633744 containerd[1738]: time="2026-01-23T23:57:07.632609384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:07.634985 containerd[1738]: time="2026-01-23T23:57:07.634953419Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Jan 23 23:57:07.637514 containerd[1738]: time="2026-01-23T23:57:07.637451733Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:07.642819 containerd[1738]: time="2026-01-23T23:57:07.642297442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:07.643665 containerd[1738]: time="2026-01-23T23:57:07.643617119Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.106071713s" Jan 23 23:57:07.643750 containerd[1738]: time="2026-01-23T23:57:07.643670599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 23 23:57:07.644189 containerd[1738]: time="2026-01-23T23:57:07.644149398Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 23:57:08.202477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679308717.mount: Deactivated successfully. 
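The PullImage/ImageCreate sequence above is containerd resolving a tag, fetching layers, and recording the repo digest. A sketch of the same pull driven directly through the containerd Go client in the "k8s.io" namespace the CRI plugin uses (assumes the v1 containerd client matching the v1.7.21 daemon in this log):

    // Sketch: pull an image the way the CRI plugin's log entries describe,
    // then report its name, size, and digest.
    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10.1", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	size, err := img.Size(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s (%d bytes, digest %s)", img.Name(), size, img.Target().Digest)
    }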
Jan 23 23:57:08.222440 containerd[1738]: time="2026-01-23T23:57:08.222359210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:08.224883 containerd[1738]: time="2026-01-23T23:57:08.224703125Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Jan 23 23:57:08.230911 containerd[1738]: time="2026-01-23T23:57:08.230864991Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:08.238682 containerd[1738]: time="2026-01-23T23:57:08.238638054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:08.239371 containerd[1738]: time="2026-01-23T23:57:08.239234452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 594.126216ms" Jan 23 23:57:08.239371 containerd[1738]: time="2026-01-23T23:57:08.239282292Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 23 23:57:08.240244 containerd[1738]: time="2026-01-23T23:57:08.240174890Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 23:57:08.688391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 23:57:08.694796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:08.888450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:08.896663 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:57:08.931498 kubelet[2537]: E0123 23:57:08.931449 2537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:57:08.933978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:57:08.934115 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:57:09.270406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356278121.mount: Deactivated successfully. Jan 23 23:57:09.689429 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 23 23:57:10.703649 update_engine[1713]: I20260123 23:57:10.703580 1713 update_attempter.cc:509] Updating boot flags... 
Jan 23 23:57:11.196445 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2602) Jan 23 23:57:11.279441 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2606) Jan 23 23:57:11.415494 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2606) Jan 23 23:57:12.368925 containerd[1738]: time="2026-01-23T23:57:12.368865476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:12.371337 containerd[1738]: time="2026-01-23T23:57:12.371308109Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Jan 23 23:57:12.374845 containerd[1738]: time="2026-01-23T23:57:12.374794898Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:12.380373 containerd[1738]: time="2026-01-23T23:57:12.379884161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:12.381224 containerd[1738]: time="2026-01-23T23:57:12.381189797Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.140975627s" Jan 23 23:57:12.381224 containerd[1738]: time="2026-01-23T23:57:12.381223477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 23 23:57:18.158966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:18.171606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:18.203986 systemd[1]: Reloading requested from client PID 2717 ('systemctl') (unit session-9.scope)... Jan 23 23:57:18.204003 systemd[1]: Reloading... Jan 23 23:57:18.326433 zram_generator::config[2757]: No configuration found. Jan 23 23:57:18.427668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:57:18.517679 systemd[1]: Reloading finished in 313 ms. Jan 23 23:57:18.559579 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:57:18.559653 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:57:18.559862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:18.562751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:18.728434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:18.738691 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:57:18.773899 kubelet[2825]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
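"Reloading requested from client PID 2717 ('systemctl')" is systemctl asking PID 1 for a daemon-reload over D-Bus, followed by a kubelet restart. The same sequence is available programmatically; a sketch with the go-systemd bindings (assumes github.com/coreos/go-systemd/v22/dbus and sufficient privileges):

    // Sketch: the daemon-reload + unit restart sequence the log shows,
    // driven over the systemd D-Bus API instead of the systemctl CLI.
    package main

    import (
    	"context"
    	"log"

    	"github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
    	ctx := context.Background()
    	conn, err := dbus.NewWithContext(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	if err := conn.ReloadContext(ctx); err != nil { // "Reloading..."
    		log.Fatal(err)
    	}

    	done := make(chan string, 1)
    	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("kubelet.service restart job:", <-done) // "done" on success
    }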
Jan 23 23:57:18.773899 kubelet[2825]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:57:18.860801 kubelet[2825]: I0123 23:57:18.860393 2825 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:57:19.579757 kubelet[2825]: I0123 23:57:19.579717 2825 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 23:57:19.579757 kubelet[2825]: I0123 23:57:19.579748 2825 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:57:19.581096 kubelet[2825]: I0123 23:57:19.581074 2825 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 23:57:19.581135 kubelet[2825]: I0123 23:57:19.581097 2825 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:57:19.581375 kubelet[2825]: I0123 23:57:19.581360 2825 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:57:19.590465 kubelet[2825]: E0123 23:57:19.590400 2825 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:57:19.591435 kubelet[2825]: I0123 23:57:19.591290 2825 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:57:19.595441 kubelet[2825]: E0123 23:57:19.595395 2825 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:57:19.595612 kubelet[2825]: I0123 23:57:19.595600 2825 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 23 23:57:19.598434 kubelet[2825]: I0123 23:57:19.598398 2825 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 23:57:19.598620 kubelet[2825]: I0123 23:57:19.598590 2825 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:57:19.598756 kubelet[2825]: I0123 23:57:19.598618 2825 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-31deed6810","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:57:19.598841 kubelet[2825]: I0123 23:57:19.598757 2825 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:57:19.598841 kubelet[2825]: I0123 23:57:19.598765 2825 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 23:57:19.598884 kubelet[2825]: I0123 23:57:19.598858 2825 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 23:57:19.602857 kubelet[2825]: I0123 23:57:19.602839 2825 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:57:19.604016 kubelet[2825]: I0123 23:57:19.603990 2825 kubelet.go:475] "Attempting to sync node with API server" Jan 23 23:57:19.604016 kubelet[2825]: I0123 23:57:19.604013 2825 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:57:19.606526 kubelet[2825]: I0123 23:57:19.604040 2825 kubelet.go:387] "Adding apiserver pod source" Jan 23 23:57:19.606526 kubelet[2825]: I0123 23:57:19.604051 2825 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:57:19.606526 kubelet[2825]: E0123 23:57:19.605225 2825 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:57:19.606526 kubelet[2825]: E0123 23:57:19.605916 2825 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-31deed6810&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:57:19.607182 kubelet[2825]: I0123 23:57:19.607165 2825 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:57:19.608137 kubelet[2825]: I0123 23:57:19.608118 2825 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:57:19.608250 kubelet[2825]: I0123 23:57:19.608239 2825 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 23:57:19.608351 kubelet[2825]: W0123 23:57:19.608331 2825 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:57:19.613091 kubelet[2825]: I0123 23:57:19.613074 2825 server.go:1262] "Started kubelet" Jan 23 23:57:19.614100 kubelet[2825]: I0123 23:57:19.614058 2825 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:57:19.614675 kubelet[2825]: I0123 23:57:19.614643 2825 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:57:19.615017 kubelet[2825]: I0123 23:57:19.615000 2825 server.go:310] "Adding debug handlers to kubelet server" Jan 23 23:57:19.618094 kubelet[2825]: I0123 23:57:19.618039 2825 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:57:19.618232 kubelet[2825]: I0123 23:57:19.618218 2825 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 23:57:19.618522 kubelet[2825]: I0123 23:57:19.618509 2825 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:57:19.622399 kubelet[2825]: I0123 23:57:19.622373 2825 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:57:19.623028 kubelet[2825]: I0123 23:57:19.623002 2825 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 23:57:19.623483 kubelet[2825]: E0123 23:57:19.623189 2825 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-31deed6810\" not found" Jan 23 23:57:19.624905 kubelet[2825]: E0123 23:57:19.624865 2825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-31deed6810?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="200ms" Jan 23 23:57:19.625972 kubelet[2825]: I0123 23:57:19.625949 2825 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 23:57:19.626039 kubelet[2825]: I0123 23:57:19.626009 2825 reconciler.go:29] "Reconciler: start to sync state" Jan 23 23:57:19.626143 kubelet[2825]: E0123 23:57:19.625047 2825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-31deed6810.188d8185a4444230 default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-31deed6810,UID:ci-4081.3.6-n-31deed6810,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-31deed6810,},FirstTimestamp:2026-01-23 23:57:19.613043248 +0000 UTC m=+0.871452305,LastTimestamp:2026-01-23 23:57:19.613043248 +0000 UTC m=+0.871452305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-31deed6810,}" Jan 23 23:57:19.626777 kubelet[2825]: I0123 23:57:19.626752 2825 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:57:19.626921 kubelet[2825]: I0123 23:57:19.626905 2825 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:57:19.628707 kubelet[2825]: I0123 23:57:19.628689 2825 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:57:19.640142 kubelet[2825]: E0123 23:57:19.639725 2825 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:57:19.641541 kubelet[2825]: E0123 23:57:19.640573 2825 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:57:19.663454 kubelet[2825]: I0123 23:57:19.663404 2825 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 23:57:19.664909 kubelet[2825]: I0123 23:57:19.664885 2825 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 23:57:19.665214 kubelet[2825]: I0123 23:57:19.665201 2825 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 23:57:19.665315 kubelet[2825]: I0123 23:57:19.665306 2825 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 23:57:19.665509 kubelet[2825]: E0123 23:57:19.665398 2825 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:57:19.666576 kubelet[2825]: E0123 23:57:19.666552 2825 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:57:19.680994 kubelet[2825]: I0123 23:57:19.680973 2825 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:57:19.681122 kubelet[2825]: I0123 23:57:19.681109 2825 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:57:19.681206 kubelet[2825]: I0123 23:57:19.681197 2825 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:57:19.685335 kubelet[2825]: I0123 23:57:19.685313 2825 policy_none.go:49] "None policy: Start" Jan 23 23:57:19.685501 kubelet[2825]: I0123 23:57:19.685490 2825 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 23:57:19.685579 kubelet[2825]: I0123 23:57:19.685570 2825 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 23:57:19.690001 kubelet[2825]: I0123 23:57:19.689982 2825 policy_none.go:47] "Start" Jan 23 23:57:19.693460 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 23:57:19.703725 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 23:57:19.706605 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 23:57:19.717283 kubelet[2825]: E0123 23:57:19.717250 2825 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:57:19.717872 kubelet[2825]: I0123 23:57:19.717463 2825 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:57:19.717872 kubelet[2825]: I0123 23:57:19.717479 2825 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:57:19.717872 kubelet[2825]: I0123 23:57:19.717691 2825 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:57:19.719506 kubelet[2825]: E0123 23:57:19.719400 2825 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:57:19.719506 kubelet[2825]: E0123 23:57:19.719454 2825 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-31deed6810\" not found" Jan 23 23:57:19.778959 systemd[1]: Created slice kubepods-burstable-poddc57092940c0cbaefda9c065425e4345.slice - libcontainer container kubepods-burstable-poddc57092940c0cbaefda9c065425e4345.slice. 
Jan 23 23:57:19.787087 kubelet[2825]: E0123 23:57:19.787056 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.791641 systemd[1]: Created slice kubepods-burstable-podaa7418937a7b701981a001d29cd90f8f.slice - libcontainer container kubepods-burstable-podaa7418937a7b701981a001d29cd90f8f.slice. Jan 23 23:57:19.799744 kubelet[2825]: E0123 23:57:19.799716 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.802630 systemd[1]: Created slice kubepods-burstable-pod4239a067dc01e17a6a7d22b65833da52.slice - libcontainer container kubepods-burstable-pod4239a067dc01e17a6a7d22b65833da52.slice. Jan 23 23:57:19.804424 kubelet[2825]: E0123 23:57:19.804384 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.819879 kubelet[2825]: I0123 23:57:19.819853 2825 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.820328 kubelet[2825]: E0123 23:57:19.820298 2825 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.825719 kubelet[2825]: E0123 23:57:19.825690 2825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-31deed6810?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="400ms" Jan 23 23:57:19.927249 kubelet[2825]: I0123 23:57:19.926977 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.927249 kubelet[2825]: I0123 23:57:19.927015 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.927249 kubelet[2825]: I0123 23:57:19.927031 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc57092940c0cbaefda9c065425e4345-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-31deed6810\" (UID: \"dc57092940c0cbaefda9c065425e4345\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.927249 kubelet[2825]: I0123 23:57:19.927071 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: 
\"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.927249 kubelet[2825]: I0123 23:57:19.927104 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.927468 kubelet[2825]: I0123 23:57:19.927123 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.927468 kubelet[2825]: I0123 23:57:19.927138 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4239a067dc01e17a6a7d22b65833da52-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-31deed6810\" (UID: \"4239a067dc01e17a6a7d22b65833da52\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.927468 kubelet[2825]: I0123 23:57:19.927160 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc57092940c0cbaefda9c065425e4345-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-31deed6810\" (UID: \"dc57092940c0cbaefda9c065425e4345\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:19.927468 kubelet[2825]: I0123 23:57:19.927186 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc57092940c0cbaefda9c065425e4345-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-31deed6810\" (UID: \"dc57092940c0cbaefda9c065425e4345\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:20.022578 kubelet[2825]: I0123 23:57:20.022546 2825 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:20.022905 kubelet[2825]: E0123 23:57:20.022882 2825 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:20.094439 containerd[1738]: time="2026-01-23T23:57:20.094079755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-31deed6810,Uid:dc57092940c0cbaefda9c065425e4345,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:20.105099 containerd[1738]: time="2026-01-23T23:57:20.105062480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-31deed6810,Uid:aa7418937a7b701981a001d29cd90f8f,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:20.109052 containerd[1738]: time="2026-01-23T23:57:20.109010148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-31deed6810,Uid:4239a067dc01e17a6a7d22b65833da52,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:20.227120 kubelet[2825]: E0123 23:57:20.227076 2825 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-31deed6810?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="800ms" Jan 23 23:57:20.425548 kubelet[2825]: I0123 23:57:20.425517 2825 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:20.425855 kubelet[2825]: E0123 23:57:20.425823 2825 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:20.650469 kubelet[2825]: E0123 23:57:20.650345 2825 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-31deed6810&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:57:20.667820 kubelet[2825]: E0123 23:57:20.667782 2825 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:57:20.692333 kubelet[2825]: E0123 23:57:20.692292 2825 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:57:20.696032 kubelet[2825]: E0123 23:57:20.695801 2825 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:57:20.702639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103684890.mount: Deactivated successfully. 
Jan 23 23:57:20.729346 containerd[1738]: time="2026-01-23T23:57:20.729295651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:20.731831 containerd[1738]: time="2026-01-23T23:57:20.731795003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:57:20.734904 containerd[1738]: time="2026-01-23T23:57:20.734876674Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:20.738440 containerd[1738]: time="2026-01-23T23:57:20.738001744Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:20.741831 containerd[1738]: time="2026-01-23T23:57:20.741789012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:57:20.745086 containerd[1738]: time="2026-01-23T23:57:20.744976481Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:20.748966 containerd[1738]: time="2026-01-23T23:57:20.748085671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:20.748966 containerd[1738]: time="2026-01-23T23:57:20.748944269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.788154ms" Jan 23 23:57:20.750390 containerd[1738]: time="2026-01-23T23:57:20.750261785Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:57:20.759527 containerd[1738]: time="2026-01-23T23:57:20.759493755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.218876ms" Jan 23 23:57:20.775853 containerd[1738]: time="2026-01-23T23:57:20.775807063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 666.728916ms" Jan 23 23:57:21.028008 kubelet[2825]: E0123 23:57:21.027964 2825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-31deed6810?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="1.6s" 
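The "Failed to ensure lease exists, will retry" interval doubles across attempts (200ms, 400ms, 800ms, now 1.6s) while the apiserver at 10.200.20.33:6443 still refuses connections. A sketch of that doubling schedule; the cap here is an assumption, since the log never runs long enough to show where doubling stops:

    // Sketch: the doubling retry interval visible in the lease-controller logs
    // (200ms -> 400ms -> 800ms -> 1.6s). The 7s cap is an assumption.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	interval := 200 * time.Millisecond
    	const maxInterval = 7 * time.Second
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
    		if interval*2 <= maxInterval {
    			interval *= 2
    		}
    	}
    }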
Jan 23 23:57:21.227829 kubelet[2825]: I0123 23:57:21.227788 2825 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:21.228128 kubelet[2825]: E0123 23:57:21.228099 2825 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:21.409889 containerd[1738]: time="2026-01-23T23:57:21.409460564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:21.411054 containerd[1738]: time="2026-01-23T23:57:21.410933440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:21.411054 containerd[1738]: time="2026-01-23T23:57:21.410963040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:21.411285 containerd[1738]: time="2026-01-23T23:57:21.411075279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:21.414676 containerd[1738]: time="2026-01-23T23:57:21.414597588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:21.414855 containerd[1738]: time="2026-01-23T23:57:21.414828947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:21.414981 containerd[1738]: time="2026-01-23T23:57:21.414957307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:21.415098 containerd[1738]: time="2026-01-23T23:57:21.415026667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:21.415179 containerd[1738]: time="2026-01-23T23:57:21.415093426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:21.415179 containerd[1738]: time="2026-01-23T23:57:21.415113226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:21.415278 containerd[1738]: time="2026-01-23T23:57:21.415193106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:21.415459 containerd[1738]: time="2026-01-23T23:57:21.415405465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:21.440624 systemd[1]: Started cri-containerd-3db071eef311ff4b9fb7345dffd32bedd028f6a1d909def543a4b2f5523da18e.scope - libcontainer container 3db071eef311ff4b9fb7345dffd32bedd028f6a1d909def543a4b2f5523da18e. Jan 23 23:57:21.442562 systemd[1]: Started cri-containerd-7eb128e8628e16e21ad48fd2939217ae1c76a1c54cdabcde0f40334ec21206c4.scope - libcontainer container 7eb128e8628e16e21ad48fd2939217ae1c76a1c54cdabcde0f40334ec21206c4. 
Jan 23 23:57:21.452884 systemd[1]: Started cri-containerd-6804a448b1657a67bfc395154775f88e54cfc2509011210dec54e965e0ef4fa2.scope - libcontainer container 6804a448b1657a67bfc395154775f88e54cfc2509011210dec54e965e0ef4fa2. Jan 23 23:57:21.490517 containerd[1738]: time="2026-01-23T23:57:21.490306827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-31deed6810,Uid:4239a067dc01e17a6a7d22b65833da52,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eb128e8628e16e21ad48fd2939217ae1c76a1c54cdabcde0f40334ec21206c4\"" Jan 23 23:57:21.494954 containerd[1738]: time="2026-01-23T23:57:21.494679013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-31deed6810,Uid:dc57092940c0cbaefda9c065425e4345,Namespace:kube-system,Attempt:0,} returns sandbox id \"3db071eef311ff4b9fb7345dffd32bedd028f6a1d909def543a4b2f5523da18e\"" Jan 23 23:57:21.506135 containerd[1738]: time="2026-01-23T23:57:21.505686498Z" level=info msg="CreateContainer within sandbox \"3db071eef311ff4b9fb7345dffd32bedd028f6a1d909def543a4b2f5523da18e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:57:21.510690 containerd[1738]: time="2026-01-23T23:57:21.510565562Z" level=info msg="CreateContainer within sandbox \"7eb128e8628e16e21ad48fd2939217ae1c76a1c54cdabcde0f40334ec21206c4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:57:21.518806 containerd[1738]: time="2026-01-23T23:57:21.518767376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-31deed6810,Uid:aa7418937a7b701981a001d29cd90f8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6804a448b1657a67bfc395154775f88e54cfc2509011210dec54e965e0ef4fa2\"" Jan 23 23:57:21.537552 containerd[1738]: time="2026-01-23T23:57:21.537513036Z" level=info msg="CreateContainer within sandbox \"6804a448b1657a67bfc395154775f88e54cfc2509011210dec54e965e0ef4fa2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:57:21.571008 containerd[1738]: time="2026-01-23T23:57:21.570961815Z" level=info msg="CreateContainer within sandbox \"3db071eef311ff4b9fb7345dffd32bedd028f6a1d909def543a4b2f5523da18e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db7e08b089d7ad22d3c974bd3add9fb53961cff1ac7c993145a32b27fe3c4df3\"" Jan 23 23:57:21.572346 containerd[1738]: time="2026-01-23T23:57:21.571615574Z" level=info msg="StartContainer for \"db7e08b089d7ad22d3c974bd3add9fb53961cff1ac7c993145a32b27fe3c4df3\"" Jan 23 23:57:21.592521 containerd[1738]: time="2026-01-23T23:57:21.592481324Z" level=info msg="CreateContainer within sandbox \"7eb128e8628e16e21ad48fd2939217ae1c76a1c54cdabcde0f40334ec21206c4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0e4f6fcbb98669b771a6b5dbb78aacca6a2cd2970eef384ab39cbfda7b4646e\"" Jan 23 23:57:21.593395 containerd[1738]: time="2026-01-23T23:57:21.593364962Z" level=info msg="StartContainer for \"a0e4f6fcbb98669b771a6b5dbb78aacca6a2cd2970eef384ab39cbfda7b4646e\"" Jan 23 23:57:21.598641 containerd[1738]: time="2026-01-23T23:57:21.598608310Z" level=info msg="CreateContainer within sandbox \"6804a448b1657a67bfc395154775f88e54cfc2509011210dec54e965e0ef4fa2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"10e7bceb028834aa7f984d64c51faac34704e03a11ea089cf9b9cf5a9596c4c6\"" Jan 23 23:57:21.598927 systemd[1]: Started 
cri-containerd-db7e08b089d7ad22d3c974bd3add9fb53961cff1ac7c993145a32b27fe3c4df3.scope - libcontainer container db7e08b089d7ad22d3c974bd3add9fb53961cff1ac7c993145a32b27fe3c4df3. Jan 23 23:57:21.599444 containerd[1738]: time="2026-01-23T23:57:21.599371028Z" level=info msg="StartContainer for \"10e7bceb028834aa7f984d64c51faac34704e03a11ea089cf9b9cf5a9596c4c6\"" Jan 23 23:57:21.634863 systemd[1]: Started cri-containerd-a0e4f6fcbb98669b771a6b5dbb78aacca6a2cd2970eef384ab39cbfda7b4646e.scope - libcontainer container a0e4f6fcbb98669b771a6b5dbb78aacca6a2cd2970eef384ab39cbfda7b4646e. Jan 23 23:57:21.654763 systemd[1]: Started cri-containerd-10e7bceb028834aa7f984d64c51faac34704e03a11ea089cf9b9cf5a9596c4c6.scope - libcontainer container 10e7bceb028834aa7f984d64c51faac34704e03a11ea089cf9b9cf5a9596c4c6. Jan 23 23:57:21.665416 containerd[1738]: time="2026-01-23T23:57:21.665227512Z" level=info msg="StartContainer for \"db7e08b089d7ad22d3c974bd3add9fb53961cff1ac7c993145a32b27fe3c4df3\" returns successfully" Jan 23 23:57:21.686391 kubelet[2825]: E0123 23:57:21.686277 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:21.732619 containerd[1738]: time="2026-01-23T23:57:21.732097474Z" level=info msg="StartContainer for \"10e7bceb028834aa7f984d64c51faac34704e03a11ea089cf9b9cf5a9596c4c6\" returns successfully" Jan 23 23:57:21.741146 containerd[1738]: time="2026-01-23T23:57:21.741101853Z" level=info msg="StartContainer for \"a0e4f6fcbb98669b771a6b5dbb78aacca6a2cd2970eef384ab39cbfda7b4646e\" returns successfully" Jan 23 23:57:22.692187 kubelet[2825]: E0123 23:57:22.692162 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:22.696668 kubelet[2825]: E0123 23:57:22.695667 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:22.696930 kubelet[2825]: E0123 23:57:22.696780 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:22.831021 kubelet[2825]: I0123 23:57:22.830990 2825 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.609930 kubelet[2825]: I0123 23:57:23.609902 2825 apiserver.go:52] "Watching apiserver" Jan 23 23:57:23.626233 kubelet[2825]: I0123 23:57:23.626174 2825 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 23:57:23.699027 kubelet[2825]: E0123 23:57:23.698903 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.699027 kubelet[2825]: E0123 23:57:23.698951 2825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.714154 kubelet[2825]: E0123 23:57:23.714109 2825 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-31deed6810\" not found" node="ci-4081.3.6-n-31deed6810" Jan 23 
23:57:23.764297 kubelet[2825]: I0123 23:57:23.764139 2825 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.764297 kubelet[2825]: E0123 23:57:23.764173 2825 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-31deed6810\": node \"ci-4081.3.6-n-31deed6810\" not found" Jan 23 23:57:23.824154 kubelet[2825]: I0123 23:57:23.824118 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.830535 kubelet[2825]: E0123 23:57:23.830502 2825 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-31deed6810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.830535 kubelet[2825]: I0123 23:57:23.830532 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.832434 kubelet[2825]: E0123 23:57:23.832390 2825 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.832434 kubelet[2825]: I0123 23:57:23.832420 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.834068 kubelet[2825]: E0123 23:57:23.834039 2825 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-31deed6810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.844536 kubelet[2825]: I0123 23:57:23.844504 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:23.848821 kubelet[2825]: E0123 23:57:23.848577 2825 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-31deed6810\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:24.700209 kubelet[2825]: I0123 23:57:24.700035 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:24.709326 kubelet[2825]: I0123 23:57:24.709081 2825 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:57:25.803943 systemd[1]: Reloading requested from client PID 3111 ('systemctl') (unit session-9.scope)... Jan 23 23:57:25.803956 systemd[1]: Reloading... Jan 23 23:57:25.907444 zram_generator::config[3154]: No configuration found. Jan 23 23:57:26.020984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:57:26.127508 systemd[1]: Reloading finished in 323 ms. Jan 23 23:57:26.165928 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:26.174607 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:57:26.174839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:57:26.174896 systemd[1]: kubelet.service: Consumed 1.140s CPU time, 121.6M memory peak, 0B memory swap peak. Jan 23 23:57:26.182035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:26.282972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:26.292175 (kubelet)[3215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:57:26.332642 kubelet[3215]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:57:26.332962 kubelet[3215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:57:26.333107 kubelet[3215]: I0123 23:57:26.333078 3215 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:57:26.338746 kubelet[3215]: I0123 23:57:26.338717 3215 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 23:57:26.339476 kubelet[3215]: I0123 23:57:26.338866 3215 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:57:26.339476 kubelet[3215]: I0123 23:57:26.338902 3215 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 23:57:26.339476 kubelet[3215]: I0123 23:57:26.338909 3215 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:57:26.339476 kubelet[3215]: I0123 23:57:26.339121 3215 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:57:26.340319 kubelet[3215]: I0123 23:57:26.340304 3215 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 23:57:26.344203 kubelet[3215]: I0123 23:57:26.344067 3215 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:57:26.347069 kubelet[3215]: E0123 23:57:26.347044 3215 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:57:26.347232 kubelet[3215]: I0123 23:57:26.347215 3215 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 23 23:57:26.350033 kubelet[3215]: I0123 23:57:26.349958 3215 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 23:57:26.350429 kubelet[3215]: I0123 23:57:26.350272 3215 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:57:26.350515 kubelet[3215]: I0123 23:57:26.350297 3215 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-31deed6810","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:57:26.350628 kubelet[3215]: I0123 23:57:26.350617 3215 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:57:26.350682 kubelet[3215]: I0123 23:57:26.350675 3215 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 23:57:26.350757 kubelet[3215]: I0123 23:57:26.350748 3215 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 23:57:26.351653 kubelet[3215]: I0123 23:57:26.351636 3215 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:57:26.351873 kubelet[3215]: I0123 23:57:26.351863 3215 kubelet.go:475] "Attempting to sync node with API server" Jan 23 23:57:26.352028 kubelet[3215]: I0123 23:57:26.351930 3215 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:57:26.352028 kubelet[3215]: I0123 23:57:26.351964 3215 kubelet.go:387] "Adding apiserver pod source" Jan 23 23:57:26.352028 kubelet[3215]: I0123 23:57:26.351974 3215 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:57:26.355823 kubelet[3215]: I0123 23:57:26.355652 3215 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:57:26.357467 kubelet[3215]: I0123 23:57:26.357453 3215 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:57:26.357596 kubelet[3215]: I0123 23:57:26.357543 3215 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 
23:57:26.366527 kubelet[3215]: I0123 23:57:26.366464 3215 server.go:1262] "Started kubelet" Jan 23 23:57:26.372448 kubelet[3215]: I0123 23:57:26.372324 3215 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:57:26.383109 kubelet[3215]: I0123 23:57:26.382977 3215 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:57:26.385244 kubelet[3215]: I0123 23:57:26.385208 3215 server.go:310] "Adding debug handlers to kubelet server" Jan 23 23:57:26.392512 kubelet[3215]: I0123 23:57:26.391993 3215 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:57:26.392512 kubelet[3215]: I0123 23:57:26.392068 3215 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 23:57:26.392512 kubelet[3215]: I0123 23:57:26.392223 3215 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:57:26.395000 kubelet[3215]: I0123 23:57:26.393936 3215 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:57:26.401446 kubelet[3215]: I0123 23:57:26.401418 3215 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 23:57:26.402440 kubelet[3215]: E0123 23:57:26.402405 3215 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-31deed6810\" not found" Jan 23 23:57:26.405236 kubelet[3215]: I0123 23:57:26.404921 3215 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 23:57:26.406143 kubelet[3215]: I0123 23:57:26.406048 3215 reconciler.go:29] "Reconciler: start to sync state" Jan 23 23:57:26.420048 kubelet[3215]: I0123 23:57:26.417952 3215 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:57:26.420048 kubelet[3215]: I0123 23:57:26.419931 3215 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:57:26.424209 kubelet[3215]: E0123 23:57:26.424183 3215 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:57:26.426247 kubelet[3215]: I0123 23:57:26.426060 3215 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:57:26.430985 kubelet[3215]: I0123 23:57:26.428501 3215 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 23:57:26.430985 kubelet[3215]: I0123 23:57:26.429507 3215 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 23:57:26.430985 kubelet[3215]: I0123 23:57:26.429525 3215 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 23:57:26.430985 kubelet[3215]: I0123 23:57:26.429551 3215 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 23:57:26.430985 kubelet[3215]: E0123 23:57:26.429592 3215 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496532 3215 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496555 3215 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496584 3215 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496731 3215 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496741 3215 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496756 3215 policy_none.go:49] "None policy: Start" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496765 3215 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496773 3215 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496892 3215 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 23:57:26.497470 kubelet[3215]: I0123 23:57:26.496899 3215 policy_none.go:47] "Start" Jan 23 23:57:26.507230 kubelet[3215]: E0123 23:57:26.507204 3215 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:57:26.507406 kubelet[3215]: I0123 23:57:26.507385 3215 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:57:26.507468 kubelet[3215]: I0123 23:57:26.507404 3215 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:57:26.507697 kubelet[3215]: I0123 23:57:26.507670 3215 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:57:26.513404 kubelet[3215]: E0123 23:57:26.513368 3215 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:57:26.531785 kubelet[3215]: I0123 23:57:26.531720 3215 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.534005 kubelet[3215]: I0123 23:57:26.533160 3215 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.534005 kubelet[3215]: I0123 23:57:26.533374 3215 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.545230 kubelet[3215]: I0123 23:57:26.544997 3215 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:57:26.548698 kubelet[3215]: I0123 23:57:26.548509 3215 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:57:26.549261 kubelet[3215]: I0123 23:57:26.548966 3215 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:57:26.550434 kubelet[3215]: E0123 23:57:26.549476 3215 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-31deed6810\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607344 kubelet[3215]: I0123 23:57:26.607269 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc57092940c0cbaefda9c065425e4345-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-31deed6810\" (UID: \"dc57092940c0cbaefda9c065425e4345\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607716 kubelet[3215]: I0123 23:57:26.607529 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607716 kubelet[3215]: I0123 23:57:26.607554 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607716 kubelet[3215]: I0123 23:57:26.607571 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607716 kubelet[3215]: I0123 23:57:26.607590 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc57092940c0cbaefda9c065425e4345-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.3.6-n-31deed6810\" (UID: \"dc57092940c0cbaefda9c065425e4345\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607716 kubelet[3215]: I0123 23:57:26.607609 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607859 kubelet[3215]: I0123 23:57:26.607654 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa7418937a7b701981a001d29cd90f8f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-31deed6810\" (UID: \"aa7418937a7b701981a001d29cd90f8f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607859 kubelet[3215]: I0123 23:57:26.607674 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4239a067dc01e17a6a7d22b65833da52-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-31deed6810\" (UID: \"4239a067dc01e17a6a7d22b65833da52\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.607859 kubelet[3215]: I0123 23:57:26.607689 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc57092940c0cbaefda9c065425e4345-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-31deed6810\" (UID: \"dc57092940c0cbaefda9c065425e4345\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.617327 kubelet[3215]: I0123 23:57:26.617285 3215 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.627343 kubelet[3215]: I0123 23:57:26.627313 3215 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:26.627498 kubelet[3215]: I0123 23:57:26.627400 3215 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-31deed6810" Jan 23 23:57:27.356360 kubelet[3215]: I0123 23:57:27.356117 3215 apiserver.go:52] "Watching apiserver" Jan 23 23:57:27.406456 kubelet[3215]: I0123 23:57:27.406381 3215 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 23:57:27.472616 kubelet[3215]: I0123 23:57:27.472449 3215 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:27.479522 kubelet[3215]: I0123 23:57:27.479492 3215 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 23:57:27.479743 kubelet[3215]: E0123 23:57:27.479648 3215 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-31deed6810\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" Jan 23 23:57:27.509685 kubelet[3215]: I0123 23:57:27.509440 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-31deed6810" podStartSLOduration=1.509422667 podStartE2EDuration="1.509422667s" 
podCreationTimestamp="2026-01-23 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:27.494926312 +0000 UTC m=+1.198573071" watchObservedRunningTime="2026-01-23 23:57:27.509422667 +0000 UTC m=+1.213069466" Jan 23 23:57:27.523124 kubelet[3215]: I0123 23:57:27.522919 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-31deed6810" podStartSLOduration=1.522904026 podStartE2EDuration="1.522904026s" podCreationTimestamp="2026-01-23 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:27.509935146 +0000 UTC m=+1.213581905" watchObservedRunningTime="2026-01-23 23:57:27.522904026 +0000 UTC m=+1.226550745" Jan 23 23:57:27.540172 kubelet[3215]: I0123 23:57:27.539604 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-31deed6810" podStartSLOduration=3.539588974 podStartE2EDuration="3.539588974s" podCreationTimestamp="2026-01-23 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:27.523471944 +0000 UTC m=+1.227118703" watchObservedRunningTime="2026-01-23 23:57:27.539588974 +0000 UTC m=+1.243235733" Jan 23 23:57:31.571933 kubelet[3215]: I0123 23:57:31.571791 3215 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:57:31.572647 containerd[1738]: time="2026-01-23T23:57:31.572551079Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:57:31.573060 kubelet[3215]: I0123 23:57:31.572731 3215 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:57:32.532953 systemd[1]: Created slice kubepods-besteffort-podf0546896_7ac6_4ba9_9a64_0558d357506b.slice - libcontainer container kubepods-besteffort-podf0546896_7ac6_4ba9_9a64_0558d357506b.slice. 
Jan 23 23:57:32.541457 kubelet[3215]: I0123 23:57:32.541314 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0546896-7ac6-4ba9-9a64-0558d357506b-lib-modules\") pod \"kube-proxy-mfclz\" (UID: \"f0546896-7ac6-4ba9-9a64-0558d357506b\") " pod="kube-system/kube-proxy-mfclz" Jan 23 23:57:32.541457 kubelet[3215]: I0123 23:57:32.541352 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvb5l\" (UniqueName: \"kubernetes.io/projected/f0546896-7ac6-4ba9-9a64-0558d357506b-kube-api-access-dvb5l\") pod \"kube-proxy-mfclz\" (UID: \"f0546896-7ac6-4ba9-9a64-0558d357506b\") " pod="kube-system/kube-proxy-mfclz" Jan 23 23:57:32.541457 kubelet[3215]: I0123 23:57:32.541374 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0546896-7ac6-4ba9-9a64-0558d357506b-kube-proxy\") pod \"kube-proxy-mfclz\" (UID: \"f0546896-7ac6-4ba9-9a64-0558d357506b\") " pod="kube-system/kube-proxy-mfclz" Jan 23 23:57:32.541457 kubelet[3215]: I0123 23:57:32.541389 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0546896-7ac6-4ba9-9a64-0558d357506b-xtables-lock\") pod \"kube-proxy-mfclz\" (UID: \"f0546896-7ac6-4ba9-9a64-0558d357506b\") " pod="kube-system/kube-proxy-mfclz" Jan 23 23:57:32.754605 systemd[1]: Created slice kubepods-besteffort-podeda11ef5_dad1_4fb7_80db_bc8042ab0980.slice - libcontainer container kubepods-besteffort-podeda11ef5_dad1_4fb7_80db_bc8042ab0980.slice. Jan 23 23:57:32.844210 kubelet[3215]: I0123 23:57:32.843788 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eda11ef5-dad1-4fb7-80db-bc8042ab0980-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-5fd86\" (UID: \"eda11ef5-dad1-4fb7-80db-bc8042ab0980\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-5fd86" Jan 23 23:57:32.844210 kubelet[3215]: I0123 23:57:32.843836 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4xmk\" (UniqueName: \"kubernetes.io/projected/eda11ef5-dad1-4fb7-80db-bc8042ab0980-kube-api-access-k4xmk\") pod \"tigera-operator-65cdcdfd6d-5fd86\" (UID: \"eda11ef5-dad1-4fb7-80db-bc8042ab0980\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-5fd86" Jan 23 23:57:32.850264 containerd[1738]: time="2026-01-23T23:57:32.850229549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mfclz,Uid:f0546896-7ac6-4ba9-9a64-0558d357506b,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:32.885967 containerd[1738]: time="2026-01-23T23:57:32.885867917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:32.885967 containerd[1738]: time="2026-01-23T23:57:32.885927197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:32.885967 containerd[1738]: time="2026-01-23T23:57:32.885942117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:32.889455 containerd[1738]: time="2026-01-23T23:57:32.886364835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:32.909599 systemd[1]: Started cri-containerd-c8b73adc075a3cf70fdac231b4fde5a79e39d8c377e17f9d4abb4fb958b484e5.scope - libcontainer container c8b73adc075a3cf70fdac231b4fde5a79e39d8c377e17f9d4abb4fb958b484e5. Jan 23 23:57:32.931789 containerd[1738]: time="2026-01-23T23:57:32.931744333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mfclz,Uid:f0546896-7ac6-4ba9-9a64-0558d357506b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8b73adc075a3cf70fdac231b4fde5a79e39d8c377e17f9d4abb4fb958b484e5\"" Jan 23 23:57:32.949606 containerd[1738]: time="2026-01-23T23:57:32.949566797Z" level=info msg="CreateContainer within sandbox \"c8b73adc075a3cf70fdac231b4fde5a79e39d8c377e17f9d4abb4fb958b484e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:57:32.989383 containerd[1738]: time="2026-01-23T23:57:32.989322152Z" level=info msg="CreateContainer within sandbox \"c8b73adc075a3cf70fdac231b4fde5a79e39d8c377e17f9d4abb4fb958b484e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d871bf599e34dc1fdb67629a54461741d68a4bb2a98b172fef18e267c676313\"" Jan 23 23:57:32.991320 containerd[1738]: time="2026-01-23T23:57:32.991276146Z" level=info msg="StartContainer for \"2d871bf599e34dc1fdb67629a54461741d68a4bb2a98b172fef18e267c676313\"" Jan 23 23:57:33.014598 systemd[1]: Started cri-containerd-2d871bf599e34dc1fdb67629a54461741d68a4bb2a98b172fef18e267c676313.scope - libcontainer container 2d871bf599e34dc1fdb67629a54461741d68a4bb2a98b172fef18e267c676313. Jan 23 23:57:33.043203 containerd[1738]: time="2026-01-23T23:57:33.043076983Z" level=info msg="StartContainer for \"2d871bf599e34dc1fdb67629a54461741d68a4bb2a98b172fef18e267c676313\" returns successfully" Jan 23 23:57:33.063125 containerd[1738]: time="2026-01-23T23:57:33.062666522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-5fd86,Uid:eda11ef5-dad1-4fb7-80db-bc8042ab0980,Namespace:tigera-operator,Attempt:0,}" Jan 23 23:57:33.103175 containerd[1738]: time="2026-01-23T23:57:33.103004515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:33.104115 containerd[1738]: time="2026-01-23T23:57:33.104057992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:33.104242 containerd[1738]: time="2026-01-23T23:57:33.104083992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:33.104491 containerd[1738]: time="2026-01-23T23:57:33.104465111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:33.121852 systemd[1]: Started cri-containerd-f98c1fc6abf77167b08166e92e83feab2a7329e45069c9ea7e017f9f736d8ab1.scope - libcontainer container f98c1fc6abf77167b08166e92e83feab2a7329e45069c9ea7e017f9f736d8ab1. 
Jan 23 23:57:33.156086 containerd[1738]: time="2026-01-23T23:57:33.155938069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-5fd86,Uid:eda11ef5-dad1-4fb7-80db-bc8042ab0980,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f98c1fc6abf77167b08166e92e83feab2a7329e45069c9ea7e017f9f736d8ab1\"" Jan 23 23:57:33.158499 containerd[1738]: time="2026-01-23T23:57:33.158166302Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 23:57:33.537716 kubelet[3215]: I0123 23:57:33.536675 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mfclz" podStartSLOduration=1.536656874 podStartE2EDuration="1.536656874s" podCreationTimestamp="2026-01-23 23:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:33.536545074 +0000 UTC m=+7.240191833" watchObservedRunningTime="2026-01-23 23:57:33.536656874 +0000 UTC m=+7.240303793" Jan 23 23:57:34.805637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505133209.mount: Deactivated successfully. Jan 23 23:57:35.531451 containerd[1738]: time="2026-01-23T23:57:35.531161853Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:35.533637 containerd[1738]: time="2026-01-23T23:57:35.533598966Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 23:57:35.536565 containerd[1738]: time="2026-01-23T23:57:35.536512557Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:35.543081 containerd[1738]: time="2026-01-23T23:57:35.543025616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:35.544004 containerd[1738]: time="2026-01-23T23:57:35.543866333Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.385643512s" Jan 23 23:57:35.544004 containerd[1738]: time="2026-01-23T23:57:35.543897693Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 23:57:35.550758 containerd[1738]: time="2026-01-23T23:57:35.550716832Z" level=info msg="CreateContainer within sandbox \"f98c1fc6abf77167b08166e92e83feab2a7329e45069c9ea7e017f9f736d8ab1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 23:57:35.572272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount849767088.mount: Deactivated successfully. 
Jan 23 23:57:35.580025 containerd[1738]: time="2026-01-23T23:57:35.579978820Z" level=info msg="CreateContainer within sandbox \"f98c1fc6abf77167b08166e92e83feab2a7329e45069c9ea7e017f9f736d8ab1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3ec839e740e9c45622ccc36f35e48b96f17f972cd4548a310b69f760455c8e38\"" Jan 23 23:57:35.581728 containerd[1738]: time="2026-01-23T23:57:35.580880897Z" level=info msg="StartContainer for \"3ec839e740e9c45622ccc36f35e48b96f17f972cd4548a310b69f760455c8e38\"" Jan 23 23:57:35.608603 systemd[1]: Started cri-containerd-3ec839e740e9c45622ccc36f35e48b96f17f972cd4548a310b69f760455c8e38.scope - libcontainer container 3ec839e740e9c45622ccc36f35e48b96f17f972cd4548a310b69f760455c8e38. Jan 23 23:57:35.635291 containerd[1738]: time="2026-01-23T23:57:35.635243687Z" level=info msg="StartContainer for \"3ec839e740e9c45622ccc36f35e48b96f17f972cd4548a310b69f760455c8e38\" returns successfully" Jan 23 23:57:36.503822 kubelet[3215]: I0123 23:57:36.503713 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-5fd86" podStartSLOduration=2.116506294 podStartE2EDuration="4.503698561s" podCreationTimestamp="2026-01-23 23:57:32 +0000 UTC" firstStartedPulling="2026-01-23 23:57:33.157581424 +0000 UTC m=+6.861228183" lastFinishedPulling="2026-01-23 23:57:35.544773651 +0000 UTC m=+9.248420450" observedRunningTime="2026-01-23 23:57:36.503480241 +0000 UTC m=+10.207126960" watchObservedRunningTime="2026-01-23 23:57:36.503698561 +0000 UTC m=+10.207345280" Jan 23 23:57:41.385870 sudo[2234]: pam_unix(sudo:session): session closed for user root Jan 23 23:57:41.459031 sshd[2231]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:41.465540 systemd[1]: sshd@6-10.200.20.33:22-10.200.16.10:38376.service: Deactivated successfully. Jan 23 23:57:41.471864 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:57:41.472069 systemd[1]: session-9.scope: Consumed 7.215s CPU time, 150.5M memory peak, 0B memory swap peak. Jan 23 23:57:41.473036 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:57:41.473988 systemd-logind[1709]: Removed session 9. Jan 23 23:57:45.308440 waagent[1911]: 2026-01-23T23:57:45.307589Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 23:57:45.318439 waagent[1911]: 2026-01-23T23:57:45.317384Z INFO ExtHandler Jan 23 23:57:45.318777 waagent[1911]: 2026-01-23T23:57:45.318730Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 91029737-91e6-43dc-b6da-7212e716fea4 eTag: 652421328817663485 source: Fabric] Jan 23 23:57:45.319277 waagent[1911]: 2026-01-23T23:57:45.319232Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 23:57:45.320144 waagent[1911]: 2026-01-23T23:57:45.320088Z INFO ExtHandler Jan 23 23:57:45.320323 waagent[1911]: 2026-01-23T23:57:45.320288Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 23:57:45.420977 waagent[1911]: 2026-01-23T23:57:45.419907Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:57:45.518022 waagent[1911]: 2026-01-23T23:57:45.517161Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F2A65AB462B295254FBA0EC237849B2CB08B0F58', 'hasPrivateKey': True} Jan 23 23:57:45.518022 waagent[1911]: 2026-01-23T23:57:45.517872Z INFO ExtHandler Fetch goal state completed Jan 23 23:57:45.519726 waagent[1911]: 2026-01-23T23:57:45.519673Z INFO ExtHandler ExtHandler Jan 23 23:57:45.522719 waagent[1911]: 2026-01-23T23:57:45.520059Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: e0254cd3-e610-48b5-8310-cfff7f934667 correlation 75a69c7e-8ae0-4ef3-bb01-fb6d8b07e912 created: 2026-01-23T23:57:37.556572Z] Jan 23 23:57:45.522719 waagent[1911]: 2026-01-23T23:57:45.521858Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 23:57:45.522719 waagent[1911]: 2026-01-23T23:57:45.522367Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 2 ms] Jan 23 23:57:51.132278 systemd[1]: Created slice kubepods-besteffort-pod51be87b1_830d_4d58_98b9_70ecf9942921.slice - libcontainer container kubepods-besteffort-pod51be87b1_830d_4d58_98b9_70ecf9942921.slice. Jan 23 23:57:51.260881 kubelet[3215]: I0123 23:57:51.260828 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hf5t\" (UniqueName: \"kubernetes.io/projected/51be87b1-830d-4d58-98b9-70ecf9942921-kube-api-access-6hf5t\") pod \"calico-typha-7d95f5fc4c-smdks\" (UID: \"51be87b1-830d-4d58-98b9-70ecf9942921\") " pod="calico-system/calico-typha-7d95f5fc4c-smdks" Jan 23 23:57:51.260881 kubelet[3215]: I0123 23:57:51.260876 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/51be87b1-830d-4d58-98b9-70ecf9942921-typha-certs\") pod \"calico-typha-7d95f5fc4c-smdks\" (UID: \"51be87b1-830d-4d58-98b9-70ecf9942921\") " pod="calico-system/calico-typha-7d95f5fc4c-smdks" Jan 23 23:57:51.261705 kubelet[3215]: I0123 23:57:51.260899 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51be87b1-830d-4d58-98b9-70ecf9942921-tigera-ca-bundle\") pod \"calico-typha-7d95f5fc4c-smdks\" (UID: \"51be87b1-830d-4d58-98b9-70ecf9942921\") " pod="calico-system/calico-typha-7d95f5fc4c-smdks" Jan 23 23:57:51.320285 systemd[1]: Created slice kubepods-besteffort-poda9b310c8_3686_4730_91e4_f25a2ca16338.slice - libcontainer container kubepods-besteffort-poda9b310c8_3686_4730_91e4_f25a2ca16338.slice. 
Jan 23 23:57:51.443115 containerd[1738]: time="2026-01-23T23:57:51.443073336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d95f5fc4c-smdks,Uid:51be87b1-830d-4d58-98b9-70ecf9942921,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:51.462279 kubelet[3215]: I0123 23:57:51.462241 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-cni-bin-dir\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462279 kubelet[3215]: I0123 23:57:51.462282 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-policysync\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462456 kubelet[3215]: I0123 23:57:51.462302 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-lib-modules\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462456 kubelet[3215]: I0123 23:57:51.462332 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-cni-log-dir\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462456 kubelet[3215]: I0123 23:57:51.462349 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-flexvol-driver-host\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462456 kubelet[3215]: I0123 23:57:51.462364 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9b310c8-3686-4730-91e4-f25a2ca16338-tigera-ca-bundle\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462456 kubelet[3215]: I0123 23:57:51.462383 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-var-lib-calico\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462570 kubelet[3215]: I0123 23:57:51.462397 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-cni-net-dir\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462570 kubelet[3215]: I0123 23:57:51.462440 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/a9b310c8-3686-4730-91e4-f25a2ca16338-node-certs\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462570 kubelet[3215]: I0123 23:57:51.462457 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-xtables-lock\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462570 kubelet[3215]: I0123 23:57:51.462473 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5gvh\" (UniqueName: \"kubernetes.io/projected/a9b310c8-3686-4730-91e4-f25a2ca16338-kube-api-access-q5gvh\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.462570 kubelet[3215]: I0123 23:57:51.462488 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a9b310c8-3686-4730-91e4-f25a2ca16338-var-run-calico\") pod \"calico-node-9rvdv\" (UID: \"a9b310c8-3686-4730-91e4-f25a2ca16338\") " pod="calico-system/calico-node-9rvdv" Jan 23 23:57:51.488378 containerd[1738]: time="2026-01-23T23:57:51.488158177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:51.488378 containerd[1738]: time="2026-01-23T23:57:51.488209777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:51.488378 containerd[1738]: time="2026-01-23T23:57:51.488220417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:51.489150 containerd[1738]: time="2026-01-23T23:57:51.488298777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:51.516601 systemd[1]: Started cri-containerd-3b14b6226a25a1c35f76143e9008cefed5a11678d2a9a926c7b7a84f3a293c2d.scope - libcontainer container 3b14b6226a25a1c35f76143e9008cefed5a11678d2a9a926c7b7a84f3a293c2d. Jan 23 23:57:51.536564 kubelet[3215]: E0123 23:57:51.534207 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:57:51.573141 kubelet[3215]: E0123 23:57:51.573107 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:51.573141 kubelet[3215]: W0123 23:57:51.573130 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:51.573315 kubelet[3215]: E0123 23:57:51.573153 3215 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:57:51.577665 containerd[1738]: time="2026-01-23T23:57:51.577602700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d95f5fc4c-smdks,Uid:51be87b1-830d-4d58-98b9-70ecf9942921,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b14b6226a25a1c35f76143e9008cefed5a11678d2a9a926c7b7a84f3a293c2d\"" Jan 23 23:57:51.579734 containerd[1738]: time="2026-01-23T23:57:51.579688375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 23:57:51.587059 kubelet[3215]: E0123 23:57:51.586395 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:51.587059 kubelet[3215]: W0123 23:57:51.586678 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:51.587059 kubelet[3215]: E0123 23:57:51.586706 3215 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:57:51.635233 containerd[1738]: time="2026-01-23T23:57:51.635129308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9rvdv,Uid:a9b310c8-3686-4730-91e4-f25a2ca16338,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:51.665338 kubelet[3215]: E0123 23:57:51.664786 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:51.665650 kubelet[3215]: W0123 23:57:51.665515 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:51.665650 kubelet[3215]: E0123 23:57:51.665546 3215 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:57:51.665650 kubelet[3215]: I0123 23:57:51.665591 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9e81a44d-05db-4251-91b5-ae7d0d2169e6-registration-dir\") pod \"csi-node-driver-kzqw2\" (UID: \"9e81a44d-05db-4251-91b5-ae7d0d2169e6\") " pod="calico-system/csi-node-driver-kzqw2" Jan 23 23:57:51.666608 kubelet[3215]: E0123 23:57:51.666478 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:51.666608 kubelet[3215]: W0123 23:57:51.666496 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:51.666608 kubelet[3215]: E0123 23:57:51.666527 3215 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 23 23:57:51.666608 kubelet[3215]: I0123 23:57:51.666584 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9e81a44d-05db-4251-91b5-ae7d0d2169e6-socket-dir\") pod \"csi-node-driver-kzqw2\" (UID: \"9e81a44d-05db-4251-91b5-ae7d0d2169e6\") " pod="calico-system/csi-node-driver-kzqw2"
Jan 23 23:57:51.667371 kubelet[3215]: E0123 23:57:51.667153 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:51.667371 kubelet[3215]: W0123 23:57:51.667167 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:51.667371 kubelet[3215]: E0123 23:57:51.667179 3215 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... this three-entry FlexVolume probe failure repeats continuously, interleaved with the volume entries below, through 23:57:51.673 ...]
Jan 23 23:57:51.668321 kubelet[3215]: I0123 23:57:51.668191 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9e81a44d-05db-4251-91b5-ae7d0d2169e6-varrun\") pod \"csi-node-driver-kzqw2\" (UID: \"9e81a44d-05db-4251-91b5-ae7d0d2169e6\") " pod="calico-system/csi-node-driver-kzqw2"
Jan 23 23:57:51.669584 kubelet[3215]: I0123 23:57:51.669553 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5ppp\" (UniqueName: \"kubernetes.io/projected/9e81a44d-05db-4251-91b5-ae7d0d2169e6-kube-api-access-w5ppp\") pod \"csi-node-driver-kzqw2\" (UID: \"9e81a44d-05db-4251-91b5-ae7d0d2169e6\") " pod="calico-system/csi-node-driver-kzqw2"
Jan 23 23:57:51.670268 kubelet[3215]: I0123 23:57:51.670162 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e81a44d-05db-4251-91b5-ae7d0d2169e6-kubelet-dir\") pod \"csi-node-driver-kzqw2\" (UID: \"9e81a44d-05db-4251-91b5-ae7d0d2169e6\") " pod="calico-system/csi-node-driver-kzqw2"
Jan 23 23:57:51.680241 containerd[1738]: time="2026-01-23T23:57:51.679934829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:57:51.680241 containerd[1738]: time="2026-01-23T23:57:51.679992509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:57:51.680241 containerd[1738]: time="2026-01-23T23:57:51.680003429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:51.680241 containerd[1738]: time="2026-01-23T23:57:51.680085909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:51.700573 systemd[1]: Started cri-containerd-4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284.scope - libcontainer container 4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284.
Jan 23 23:57:51.721297 containerd[1738]: time="2026-01-23T23:57:51.721241000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9rvdv,Uid:a9b310c8-3686-4730-91e4-f25a2ca16338,Namespace:calico-system,Attempt:0,} returns sandbox id \"4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284\""
Jan 23 23:57:51.770829 kubelet[3215]: E0123 23:57:51.770797 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:51.770829 kubelet[3215]: W0123 23:57:51.770820 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:51.770829 kubelet[3215]: E0123 23:57:51.770840 3215 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three-entry FlexVolume probe failure repeats through 23:57:51.786 ...]
Jan 23 23:57:52.372073 systemd[1]: run-containerd-runc-k8s.io-3b14b6226a25a1c35f76143e9008cefed5a11678d2a9a926c7b7a84f3a293c2d-runc.ueJNzc.mount: Deactivated successfully.
Jan 23 23:57:52.690094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224570378.mount: Deactivated successfully.
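The repeated kubelet burst above is a single benign failure mode logged in a loop: on every plugin probe the kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary has not been installed yet, and decoding the driver's empty output as JSON then fails as well. Both messages are stock Go error strings, which a minimal standalone sketch (illustrative only, not kubelet source) reproduces:

```go
// Sketch reproducing the two error strings in the kubelet burst above.
// Uses only the Go standard library; "uds" is the driver name from the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// The FlexVolume driver binary is not installed yet (and not on PATH),
	// so lookup fails with exec.ErrNotFound:
	// "executable file not found in $PATH".
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver call failed:", err)
	}

	// The failed call produced no output, and decoding "" as JSON fails
	// with "unexpected end of JSON input".
	var status map[string]any
	if err := json.Unmarshal([]byte(""), &status); err != nil {
		fmt.Println("failed to unmarshal output:", err)
	}
}
```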
Jan 23 23:57:53.236170 containerd[1738]: time="2026-01-23T23:57:53.236118515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:53.238583 containerd[1738]: time="2026-01-23T23:57:53.238533147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 23 23:57:53.240736 containerd[1738]: time="2026-01-23T23:57:53.240686901Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:53.244674 containerd[1738]: time="2026-01-23T23:57:53.244636969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:53.245447 containerd[1738]: time="2026-01-23T23:57:53.245226047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.665498272s"
Jan 23 23:57:53.245447 containerd[1738]: time="2026-01-23T23:57:53.245261647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 23 23:57:53.247297 containerd[1738]: time="2026-01-23T23:57:53.247267201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 23:57:53.265896 containerd[1738]: time="2026-01-23T23:57:53.265736586Z" level=info msg="CreateContainer within sandbox \"3b14b6226a25a1c35f76143e9008cefed5a11678d2a9a926c7b7a84f3a293c2d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 23:57:53.306165 containerd[1738]: time="2026-01-23T23:57:53.306115265Z" level=info msg="CreateContainer within sandbox \"3b14b6226a25a1c35f76143e9008cefed5a11678d2a9a926c7b7a84f3a293c2d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8200c39c5fd581e378dc7fb7868789b9322f1f945f928da0f4f7dad76af9d21f\""
Jan 23 23:57:53.308282 containerd[1738]: time="2026-01-23T23:57:53.308234538Z" level=info msg="StartContainer for \"8200c39c5fd581e378dc7fb7868789b9322f1f945f928da0f4f7dad76af9d21f\""
Jan 23 23:57:53.333631 systemd[1]: Started cri-containerd-8200c39c5fd581e378dc7fb7868789b9322f1f945f928da0f4f7dad76af9d21f.scope - libcontainer container 8200c39c5fd581e378dc7fb7868789b9322f1f945f928da0f4f7dad76af9d21f.
Jan 23 23:57:53.375010 containerd[1738]: time="2026-01-23T23:57:53.374955738Z" level=info msg="StartContainer for \"8200c39c5fd581e378dc7fb7868789b9322f1f945f928da0f4f7dad76af9d21f\" returns successfully"
Jan 23 23:57:53.430096 kubelet[3215]: E0123 23:57:53.430049 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
Jan 23 23:57:53.577525 kubelet[3215]: E0123 23:57:53.577404 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:53.577789 kubelet[3215]: W0123 23:57:53.577656 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:53.577789 kubelet[3215]: E0123 23:57:53.577685 3215 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three-entry FlexVolume probe failure repeats through 23:57:53.599 ...]
Jan 23 23:57:54.405923 containerd[1738]: time="2026-01-23T23:57:54.405867082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:54.408375 containerd[1738]: time="2026-01-23T23:57:54.408333195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 23 23:57:54.411108 containerd[1738]: time="2026-01-23T23:57:54.411058147Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:54.415536 containerd[1738]: time="2026-01-23T23:57:54.415215776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:54.415968 containerd[1738]: time="2026-01-23T23:57:54.415939254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.168635653s"
Jan 23 23:57:54.416017 containerd[1738]: time="2026-01-23T23:57:54.415971494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 23 23:57:54.422759 containerd[1738]: time="2026-01-23T23:57:54.422547595Z" level=info msg="CreateContainer within sandbox \"4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 23 23:57:54.457790 containerd[1738]: time="2026-01-23T23:57:54.457744017Z" level=info msg="CreateContainer within sandbox \"4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2\""
Jan 23 23:57:54.458700 containerd[1738]: time="2026-01-23T23:57:54.458672054Z" level=info msg="StartContainer for \"2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2\""
Jan 23 23:57:54.488595 systemd[1]: Started cri-containerd-2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2.scope - libcontainer container 2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2.
Jan 23 23:57:54.518944 containerd[1738]: time="2026-01-23T23:57:54.518888365Z" level=info msg="StartContainer for \"2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2\" returns successfully"
Jan 23 23:57:54.535723 systemd[1]: cri-containerd-2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2.scope: Deactivated successfully.
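The flexvol-driver container that just ran to completion (its scope deactivates immediately above) comes from the pod2daemon-flexvol image; in a Calico install this is the init container that drops the uds driver binary into the kubelet's FlexVolume plugin directory, after which the probe failures earlier in the log stop. A quick standalone check, using the path the kubelet was probing (illustrative sketch, not part of any shipped tool):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied verbatim from the kubelet's driver-call errors above.
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if info, err := os.Stat(driver); err != nil {
		fmt.Println("driver still missing:", err)
	} else {
		fmt.Printf("driver installed: %s (mode %v, %d bytes)\n", driver, info.Mode(), info.Size())
	}
}
```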
Jan 23 23:57:54.538100 kubelet[3215]: I0123 23:57:54.536542 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 23:57:54.560112 kubelet[3215]: I0123 23:57:54.559908 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d95f5fc4c-smdks" podStartSLOduration=1.892847741 podStartE2EDuration="3.55989069s" podCreationTimestamp="2026-01-23 23:57:51 +0000 UTC" firstStartedPulling="2026-01-23 23:57:51.579120656 +0000 UTC m=+25.282767415" lastFinishedPulling="2026-01-23 23:57:53.246163605 +0000 UTC m=+26.949810364" observedRunningTime="2026-01-23 23:57:53.591646528 +0000 UTC m=+27.295293287" watchObservedRunningTime="2026-01-23 23:57:54.55989069 +0000 UTC m=+28.263537449"
Jan 23 23:57:54.567070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2-rootfs.mount: Deactivated successfully.
Jan 23 23:57:55.430913 kubelet[3215]: E0123 23:57:55.430466 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
Jan 23 23:57:55.574183 containerd[1738]: time="2026-01-23T23:57:55.573962046Z" level=info msg="shim disconnected" id=2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2 namespace=k8s.io
Jan 23 23:57:55.574183 containerd[1738]: time="2026-01-23T23:57:55.574017486Z" level=warning msg="cleaning up after shim disconnected" id=2e94196350a3530da34eac20e3b7c104df940fcabab75489c4f068f7b02beff2 namespace=k8s.io
Jan 23 23:57:55.574183 containerd[1738]: time="2026-01-23T23:57:55.574027006Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:57:56.544109 containerd[1738]: time="2026-01-23T23:57:56.543609767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 23:57:57.430544 kubelet[3215]: E0123 23:57:57.430482 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
Jan 23 23:57:58.778725 containerd[1738]: time="2026-01-23T23:57:58.778677898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:58.781070 containerd[1738]: time="2026-01-23T23:57:58.781037132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 23 23:57:58.784338 containerd[1738]: time="2026-01-23T23:57:58.784309322Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:58.788620 containerd[1738]: time="2026-01-23T23:57:58.787803633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:58.788620 containerd[1738]: time="2026-01-23T23:57:58.788503111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.244854505s"
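The "in 2.244854505s" that containerd reports for the cni image is wall-clock pull time, and it can be cross-checked against the surrounding entries: the PullImage request is stamped 23:57:56.543609767Z and the Pulled event 23:57:58.788503111Z, roughly 2.2449 s apart (the few tens of microseconds of difference are time spent after measurement, e.g. emitting the log entry). A standalone check of that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the two containerd entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2026-01-23T23:57:56.543609767Z")
	end, _ := time.Parse(time.RFC3339Nano, "2026-01-23T23:57:58.788503111Z")
	fmt.Println(end.Sub(start)) // 2.244893344s, consistent with the reported 2.244854505s
}
```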
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.244854505s" Jan 23 23:57:58.788620 containerd[1738]: time="2026-01-23T23:57:58.788529590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:57:58.795143 containerd[1738]: time="2026-01-23T23:57:58.795109012Z" level=info msg="CreateContainer within sandbox \"4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:57:58.837794 containerd[1738]: time="2026-01-23T23:57:58.837747772Z" level=info msg="CreateContainer within sandbox \"4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a\"" Jan 23 23:57:58.839761 containerd[1738]: time="2026-01-23T23:57:58.838543650Z" level=info msg="StartContainer for \"99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a\"" Jan 23 23:57:58.878620 systemd[1]: Started cri-containerd-99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a.scope - libcontainer container 99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a. Jan 23 23:57:58.906937 containerd[1738]: time="2026-01-23T23:57:58.906812139Z" level=info msg="StartContainer for \"99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a\" returns successfully" Jan 23 23:57:59.430164 kubelet[3215]: E0123 23:57:59.430110 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:58:00.072468 containerd[1738]: time="2026-01-23T23:58:00.072399030Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:58:00.075302 systemd[1]: cri-containerd-99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a.scope: Deactivated successfully. Jan 23 23:58:00.095826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:00.098633 kubelet[3215]: I0123 23:58:00.098596 3215 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 23 23:58:00.921289 containerd[1738]: time="2026-01-23T23:58:00.921203169Z" level=info msg="shim disconnected" id=99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a namespace=k8s.io
Jan 23 23:58:00.921289 containerd[1738]: time="2026-01-23T23:58:00.921280809Z" level=warning msg="cleaning up after shim disconnected" id=99a7ec9de59d982cd4f2005e185f27c3f291ce369aff9367cb9cf6d984ca2a9a namespace=k8s.io
Jan 23 23:58:00.921289 containerd[1738]: time="2026-01-23T23:58:00.921292209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:00.923100 systemd[1]: Created slice kubepods-besteffort-pod8da9f149_a274_4b8e_a6f7_63da57140a84.slice - libcontainer container kubepods-besteffort-pod8da9f149_a274_4b8e_a6f7_63da57140a84.slice.
Jan 23 23:58:00.936551 kubelet[3215]: I0123 23:58:00.936520 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf88f\" (UniqueName: \"kubernetes.io/projected/7b65147e-f60b-40c9-8c5d-17265b54435d-kube-api-access-sf88f\") pod \"coredns-66bc5c9577-nfkkc\" (UID: \"7b65147e-f60b-40c9-8c5d-17265b54435d\") " pod="kube-system/coredns-66bc5c9577-nfkkc"
Jan 23 23:58:00.939959 kubelet[3215]: I0123 23:58:00.936607 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8da9f149-a274-4b8e-a6f7-63da57140a84-whisker-ca-bundle\") pod \"whisker-64578dd766-k2fwc\" (UID: \"8da9f149-a274-4b8e-a6f7-63da57140a84\") " pod="calico-system/whisker-64578dd766-k2fwc"
Jan 23 23:58:00.939959 kubelet[3215]: I0123 23:58:00.936644 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b65147e-f60b-40c9-8c5d-17265b54435d-config-volume\") pod \"coredns-66bc5c9577-nfkkc\" (UID: \"7b65147e-f60b-40c9-8c5d-17265b54435d\") " pod="kube-system/coredns-66bc5c9577-nfkkc"
Jan 23 23:58:00.939959 kubelet[3215]: I0123 23:58:00.938129 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8da9f149-a274-4b8e-a6f7-63da57140a84-whisker-backend-key-pair\") pod \"whisker-64578dd766-k2fwc\" (UID: \"8da9f149-a274-4b8e-a6f7-63da57140a84\") " pod="calico-system/whisker-64578dd766-k2fwc"
Jan 23 23:58:00.939959 kubelet[3215]: I0123 23:58:00.938164 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfgx4\" (UniqueName: \"kubernetes.io/projected/4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f-kube-api-access-zfgx4\") pod \"goldmane-7c778bb748-wtg4h\" (UID: \"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f\") " pod="calico-system/goldmane-7c778bb748-wtg4h"
Jan 23 23:58:00.939959 kubelet[3215]: I0123 23:58:00.939745 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnkvc\" (UniqueName: \"kubernetes.io/projected/8da9f149-a274-4b8e-a6f7-63da57140a84-kube-api-access-mnkvc\") pod \"whisker-64578dd766-k2fwc\" (UID: \"8da9f149-a274-4b8e-a6f7-63da57140a84\") " pod="calico-system/whisker-64578dd766-k2fwc"
Jan 23 23:58:00.941387 kubelet[3215]: I0123 23:58:00.939769 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-wtg4h\" (UID: \"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f\") " pod="calico-system/goldmane-7c778bb748-wtg4h"
Jan 23 23:58:00.941387 kubelet[3215]: I0123 23:58:00.940223 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f-config\") pod \"goldmane-7c778bb748-wtg4h\" (UID: \"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f\") " pod="calico-system/goldmane-7c778bb748-wtg4h"
Jan 23 23:58:00.941387 kubelet[3215]: I0123 23:58:00.940274 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f-goldmane-key-pair\") pod \"goldmane-7c778bb748-wtg4h\" (UID: \"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f\") " pod="calico-system/goldmane-7c778bb748-wtg4h"
Jan 23 23:58:00.943335 systemd[1]: Created slice kubepods-besteffort-pod9e81a44d_05db_4251_91b5_ae7d0d2169e6.slice - libcontainer container kubepods-besteffort-pod9e81a44d_05db_4251_91b5_ae7d0d2169e6.slice.
Jan 23 23:58:00.953175 systemd[1]: Created slice kubepods-burstable-pod7b65147e_f60b_40c9_8c5d_17265b54435d.slice - libcontainer container kubepods-burstable-pod7b65147e_f60b_40c9_8c5d_17265b54435d.slice.
Jan 23 23:58:00.953843 containerd[1738]: time="2026-01-23T23:58:00.953529239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kzqw2,Uid:9e81a44d-05db-4251-91b5-ae7d0d2169e6,Namespace:calico-system,Attempt:0,}"
Jan 23 23:58:00.968968 systemd[1]: Created slice kubepods-besteffort-pod4d7c1094_ee9c_40ad_a8ec_c1150bd1b90f.slice - libcontainer container kubepods-besteffort-pod4d7c1094_ee9c_40ad_a8ec_c1150bd1b90f.slice.
Jan 23 23:58:00.977532 systemd[1]: Created slice kubepods-besteffort-pod2db427fa_9e25_4d91_9748_361f655acfc7.slice - libcontainer container kubepods-besteffort-pod2db427fa_9e25_4d91_9748_361f655acfc7.slice.
Jan 23 23:58:00.984136 systemd[1]: Created slice kubepods-burstable-pod840af08f_d1f8_4fdc_a3d8_e0970397bca1.slice - libcontainer container kubepods-burstable-pod840af08f_d1f8_4fdc_a3d8_e0970397bca1.slice.
Jan 23 23:58:01.010072 systemd[1]: Created slice kubepods-besteffort-pod0adf700e_5270_411b_82bf_1b013a95c851.slice - libcontainer container kubepods-besteffort-pod0adf700e_5270_411b_82bf_1b013a95c851.slice.
Jan 23 23:58:01.020389 systemd[1]: Created slice kubepods-besteffort-pod1a7bcaec_bb3f_491a_bd0f_d443085a7496.slice - libcontainer container kubepods-besteffort-pod1a7bcaec_bb3f_491a_bd0f_d443085a7496.slice.
Jan 23 23:58:01.040991 kubelet[3215]: I0123 23:58:01.040939 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqg6h\" (UniqueName: \"kubernetes.io/projected/0adf700e-5270-411b-82bf-1b013a95c851-kube-api-access-dqg6h\") pod \"calico-apiserver-7ddd4879dc-2hqft\" (UID: \"0adf700e-5270-411b-82bf-1b013a95c851\") " pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft"
Jan 23 23:58:01.040991 kubelet[3215]: I0123 23:58:01.040979 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2db427fa-9e25-4d91-9748-361f655acfc7-calico-apiserver-certs\") pod \"calico-apiserver-7ddd4879dc-2rp8t\" (UID: \"2db427fa-9e25-4d91-9748-361f655acfc7\") " pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t"
Jan 23 23:58:01.040991 kubelet[3215]: I0123 23:58:01.041000 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0adf700e-5270-411b-82bf-1b013a95c851-calico-apiserver-certs\") pod \"calico-apiserver-7ddd4879dc-2hqft\" (UID: \"0adf700e-5270-411b-82bf-1b013a95c851\") " pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft"
Jan 23 23:58:01.041238 kubelet[3215]: I0123 23:58:01.041014 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840af08f-d1f8-4fdc-a3d8-e0970397bca1-config-volume\") pod \"coredns-66bc5c9577-rzsxf\" (UID: \"840af08f-d1f8-4fdc-a3d8-e0970397bca1\") " pod="kube-system/coredns-66bc5c9577-rzsxf"
Jan 23 23:58:01.041238 kubelet[3215]: I0123 23:58:01.041031 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fshtg\" (UniqueName: \"kubernetes.io/projected/840af08f-d1f8-4fdc-a3d8-e0970397bca1-kube-api-access-fshtg\") pod \"coredns-66bc5c9577-rzsxf\" (UID: \"840af08f-d1f8-4fdc-a3d8-e0970397bca1\") " pod="kube-system/coredns-66bc5c9577-rzsxf"
Jan 23 23:58:01.041238 kubelet[3215]: I0123 23:58:01.041044 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a7bcaec-bb3f-491a-bd0f-d443085a7496-tigera-ca-bundle\") pod \"calico-kube-controllers-6dbb8cd949-trksp\" (UID: \"1a7bcaec-bb3f-491a-bd0f-d443085a7496\") " pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp"
Jan 23 23:58:01.041238 kubelet[3215]: I0123 23:58:01.041073 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvvr\" (UniqueName: \"kubernetes.io/projected/2db427fa-9e25-4d91-9748-361f655acfc7-kube-api-access-vgvvr\") pod \"calico-apiserver-7ddd4879dc-2rp8t\" (UID: \"2db427fa-9e25-4d91-9748-361f655acfc7\") " pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t"
Jan 23 23:58:01.041238 kubelet[3215]: I0123 23:58:01.041113 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724pj\" (UniqueName: \"kubernetes.io/projected/1a7bcaec-bb3f-491a-bd0f-d443085a7496-kube-api-access-724pj\") pod \"calico-kube-controllers-6dbb8cd949-trksp\" (UID: \"1a7bcaec-bb3f-491a-bd0f-d443085a7496\") " pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp"
Jan 23 23:58:01.105681 containerd[1738]: time="2026-01-23T23:58:01.105635372Z" level=error msg="Failed to destroy network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.107691 containerd[1738]: time="2026-01-23T23:58:01.107647766Z" level=error msg="encountered an error cleaning up failed sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.108876 containerd[1738]: time="2026-01-23T23:58:01.107716446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kzqw2,Uid:9e81a44d-05db-4251-91b5-ae7d0d2169e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.108690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad-shm.mount: Deactivated successfully.
Jan 23 23:58:01.110028 kubelet[3215]: E0123 23:58:01.107932 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.110028 kubelet[3215]: E0123 23:58:01.108005 3215 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kzqw2"
Jan 23 23:58:01.110028 kubelet[3215]: E0123 23:58:01.108023 3215 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kzqw2"
Jan 23 23:58:01.110395 kubelet[3215]: E0123 23:58:01.108073 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
Jan 23 23:58:01.240300 containerd[1738]: time="2026-01-23T23:58:01.239869595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64578dd766-k2fwc,Uid:8da9f149-a274-4b8e-a6f7-63da57140a84,Namespace:calico-system,Attempt:0,}"
Jan 23 23:58:01.268805 containerd[1738]: time="2026-01-23T23:58:01.268765754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nfkkc,Uid:7b65147e-f60b-40c9-8c5d-17265b54435d,Namespace:kube-system,Attempt:0,}"
Jan 23 23:58:01.287037 containerd[1738]: time="2026-01-23T23:58:01.286804704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wtg4h,Uid:4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f,Namespace:calico-system,Attempt:0,}"
Jan 23 23:58:01.290314 containerd[1738]: time="2026-01-23T23:58:01.290280654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4879dc-2rp8t,Uid:2db427fa-9e25-4d91-9748-361f655acfc7,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 23:58:01.302831 containerd[1738]: time="2026-01-23T23:58:01.302472180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rzsxf,Uid:840af08f-d1f8-4fdc-a3d8-e0970397bca1,Namespace:kube-system,Attempt:0,}"
Jan 23 23:58:01.319402 containerd[1738]: time="2026-01-23T23:58:01.319367253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4879dc-2hqft,Uid:0adf700e-5270-411b-82bf-1b013a95c851,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 23:58:01.331195 containerd[1738]: time="2026-01-23T23:58:01.331161179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dbb8cd949-trksp,Uid:1a7bcaec-bb3f-491a-bd0f-d443085a7496,Namespace:calico-system,Attempt:0,}"
Jan 23 23:58:01.343204 containerd[1738]: time="2026-01-23T23:58:01.343164026Z" level=error msg="Failed to destroy network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.343731 containerd[1738]: time="2026-01-23T23:58:01.343639064Z" level=error msg="encountered an error cleaning up failed sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.343731 containerd[1738]: time="2026-01-23T23:58:01.343694624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64578dd766-k2fwc,Uid:8da9f149-a274-4b8e-a6f7-63da57140a84,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.344328 kubelet[3215]: E0123 23:58:01.344002 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.344328 kubelet[3215]: E0123 23:58:01.344049 3215 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64578dd766-k2fwc"
Jan 23 23:58:01.344328 kubelet[3215]: E0123 23:58:01.344066 3215 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64578dd766-k2fwc"
Jan 23 23:58:01.345653 kubelet[3215]: E0123 23:58:01.344112 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64578dd766-k2fwc_calico-system(8da9f149-a274-4b8e-a6f7-63da57140a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64578dd766-k2fwc_calico-system(8da9f149-a274-4b8e-a6f7-63da57140a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64578dd766-k2fwc" podUID="8da9f149-a274-4b8e-a6f7-63da57140a84"
Jan 23 23:58:01.562466 kubelet[3215]: I0123 23:58:01.561338 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d"
Jan 23 23:58:01.563442 containerd[1738]: time="2026-01-23T23:58:01.563093289Z" level=info msg="StopPodSandbox for \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\""
Jan 23 23:58:01.563442 containerd[1738]: time="2026-01-23T23:58:01.563259849Z" level=info msg="Ensure that sandbox 5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d in task-service has been cleanup successfully"
Jan 23 23:58:01.567397 containerd[1738]: time="2026-01-23T23:58:01.566935118Z" level=error msg="Failed to destroy network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.569659 containerd[1738]: time="2026-01-23T23:58:01.569349311Z" level=error msg="encountered an error cleaning up failed sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.571790 containerd[1738]: time="2026-01-23T23:58:01.571672065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nfkkc,Uid:7b65147e-f60b-40c9-8c5d-17265b54435d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.577165 kubelet[3215]: E0123 23:58:01.576943 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.577165 kubelet[3215]: E0123 23:58:01.577005 3215 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nfkkc"
Jan 23 23:58:01.577165 kubelet[3215]: E0123 23:58:01.577024 3215 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nfkkc"
Jan 23 23:58:01.577339 kubelet[3215]: E0123 23:58:01.577077 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nfkkc_kube-system(7b65147e-f60b-40c9-8c5d-17265b54435d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nfkkc_kube-system(7b65147e-f60b-40c9-8c5d-17265b54435d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nfkkc" podUID="7b65147e-f60b-40c9-8c5d-17265b54435d"
Jan 23 23:58:01.588431 kubelet[3215]: I0123 23:58:01.587428 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad"
Jan 23 23:58:01.591433 containerd[1738]: time="2026-01-23T23:58:01.590633612Z" level=info msg="StopPodSandbox for \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\""
Jan 23 23:58:01.591433 containerd[1738]: time="2026-01-23T23:58:01.590794331Z" level=info msg="Ensure that sandbox cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad in task-service has been cleanup successfully"
Jan 23 23:58:01.604786 containerd[1738]: time="2026-01-23T23:58:01.604753252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 23 23:58:01.681176 containerd[1738]: time="2026-01-23T23:58:01.680628252Z" level=error msg="StopPodSandbox for \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\" failed" error="failed to destroy network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.683387 kubelet[3215]: E0123 23:58:01.683334 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d"
Jan 23 23:58:01.683827 kubelet[3215]: E0123 23:58:01.683780 3215 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d"}
Jan 23 23:58:01.683940 kubelet[3215]: E0123 23:58:01.683924 3215 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8da9f149-a274-4b8e-a6f7-63da57140a84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:58:01.684097 kubelet[3215]: E0123 23:58:01.684051 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8da9f149-a274-4b8e-a6f7-63da57140a84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64578dd766-k2fwc" podUID="8da9f149-a274-4b8e-a6f7-63da57140a84"
Jan 23 23:58:01.700061 containerd[1738]: time="2026-01-23T23:58:01.700011042Z" level=error msg="Failed to destroy network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.700360 containerd[1738]: time="2026-01-23T23:58:01.700332841Z" level=error msg="encountered an error cleaning up failed sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.702838 containerd[1738]: time="2026-01-23T23:58:01.702791474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wtg4h,Uid:4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.703433 kubelet[3215]: E0123 23:58:01.703169 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.703433 kubelet[3215]: E0123 23:58:01.703227 3215 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wtg4h"
Jan 23 23:58:01.703433 kubelet[3215]: E0123 23:58:01.703247 3215 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wtg4h"
Jan 23 23:58:01.703610 kubelet[3215]: E0123 23:58:01.703292 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-wtg4h_calico-system(4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-wtg4h_calico-system(4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f"
Jan 23 23:58:01.706439 containerd[1738]: time="2026-01-23T23:58:01.706272945Z" level=error msg="Failed to destroy network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.707805 containerd[1738]: time="2026-01-23T23:58:01.707759981Z" level=error msg="encountered an error cleaning up failed sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.707954 containerd[1738]: time="2026-01-23T23:58:01.707909701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4879dc-2rp8t,Uid:2db427fa-9e25-4d91-9748-361f655acfc7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.708249 kubelet[3215]: E0123 23:58:01.708220 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.708501 kubelet[3215]: E0123 23:58:01.708362 3215 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t"
Jan 23 23:58:01.708501 kubelet[3215]: E0123 23:58:01.708388 3215 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t"
Jan 23 23:58:01.708501 kubelet[3215]: E0123 23:58:01.708463 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ddd4879dc-2rp8t_calico-apiserver(2db427fa-9e25-4d91-9748-361f655acfc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ddd4879dc-2rp8t_calico-apiserver(2db427fa-9e25-4d91-9748-361f655acfc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7"
Jan 23 23:58:01.731707 containerd[1738]: time="2026-01-23T23:58:01.731563039Z" level=error msg="Failed to destroy network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.732149 containerd[1738]: time="2026-01-23T23:58:01.732086238Z" level=error msg="encountered an error cleaning up failed sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.733023 containerd[1738]: time="2026-01-23T23:58:01.732993035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rzsxf,Uid:840af08f-d1f8-4fdc-a3d8-e0970397bca1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.733430 kubelet[3215]: E0123 23:58:01.733301 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.733430 kubelet[3215]: E0123 23:58:01.733358 3215 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rzsxf"
Jan 23 23:58:01.733430 kubelet[3215]: E0123 23:58:01.733379 3215 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rzsxf"
Jan 23 23:58:01.733557 kubelet[3215]: E0123 23:58:01.733434 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rzsxf_kube-system(840af08f-d1f8-4fdc-a3d8-e0970397bca1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rzsxf_kube-system(840af08f-d1f8-4fdc-a3d8-e0970397bca1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rzsxf" podUID="840af08f-d1f8-4fdc-a3d8-e0970397bca1"
Jan 23 23:58:01.734183 containerd[1738]: time="2026-01-23T23:58:01.733956073Z" level=error msg="Failed to destroy network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.736006 containerd[1738]: time="2026-01-23T23:58:01.735972268Z" level=error msg="encountered an error cleaning up failed sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.736161 containerd[1738]: time="2026-01-23T23:58:01.736110827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dbb8cd949-trksp,Uid:1a7bcaec-bb3f-491a-bd0f-d443085a7496,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.736630 kubelet[3215]: E0123 23:58:01.736576 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.736630 kubelet[3215]: E0123 23:58:01.736626 3215 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp"
Jan 23 23:58:01.736833 kubelet[3215]: E0123 23:58:01.736646 3215 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp"
Jan 23 23:58:01.736833 kubelet[3215]: E0123 23:58:01.736693 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dbb8cd949-trksp_calico-system(1a7bcaec-bb3f-491a-bd0f-d443085a7496)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dbb8cd949-trksp_calico-system(1a7bcaec-bb3f-491a-bd0f-d443085a7496)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496"
Jan 23 23:58:01.739252 containerd[1738]: time="2026-01-23T23:58:01.739210219Z" level=error msg="StopPodSandbox for \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\" failed" error="failed to destroy network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.740056 kubelet[3215]: E0123 23:58:01.739787 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad"
Jan 23 23:58:01.740056 kubelet[3215]: E0123 23:58:01.739829 3215 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad"}
Jan 23 23:58:01.740056 kubelet[3215]: E0123 23:58:01.739980 3215 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e81a44d-05db-4251-91b5-ae7d0d2169e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:58:01.740056 kubelet[3215]: E0123 23:58:01.740019 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e81a44d-05db-4251-91b5-ae7d0d2169e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
Jan 23 23:58:01.742753 containerd[1738]: time="2026-01-23T23:58:01.742711650Z" level=error msg="Failed to destroy network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.743135 containerd[1738]: time="2026-01-23T23:58:01.743113089Z" level=error msg="encountered an error cleaning up failed sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.743298 containerd[1738]: time="2026-01-23T23:58:01.743230969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4879dc-2hqft,Uid:0adf700e-5270-411b-82bf-1b013a95c851,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.744037 kubelet[3215]: E0123 23:58:01.743557 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:01.744037 kubelet[3215]: E0123 23:58:01.743604 3215 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft"
Jan 23 23:58:01.744037 kubelet[3215]: E0123 23:58:01.743620 3215 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft"
Jan 23 23:58:01.744205 kubelet[3215]: E0123 23:58:01.743664 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ddd4879dc-2hqft_calico-apiserver(0adf700e-5270-411b-82bf-1b013a95c851)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ddd4879dc-2hqft_calico-apiserver(0adf700e-5270-411b-82bf-1b013a95c851)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851"
Jan 23 23:58:02.590189 kubelet[3215]: I0123 23:58:02.590160 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95"
Jan 23 23:58:02.591589 containerd[1738]: time="2026-01-23T23:58:02.591090071Z" level=info msg="StopPodSandbox for \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\""
Jan 23 23:58:02.591589 containerd[1738]: time="2026-01-23T23:58:02.591465390Z" level=info msg="Ensure that sandbox 802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95 in task-service has been cleanup successfully"
Jan 23 23:58:02.592918 kubelet[3215]: I0123 23:58:02.592650 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de"
Jan 23 23:58:02.593352 containerd[1738]: time="2026-01-23T23:58:02.593248585Z" level=info msg="StopPodSandbox for \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\""
Jan 23 23:58:02.594229 containerd[1738]: time="2026-01-23T23:58:02.593804544Z" level=info msg="Ensure that sandbox 589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de in task-service has been cleanup successfully"
Jan 23 23:58:02.594300 kubelet[3215]: I0123 23:58:02.594203 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473"
Jan 23 23:58:02.596442 containerd[1738]: time="2026-01-23T23:58:02.595308420Z" level=info msg="StopPodSandbox for \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\""
Jan 23 23:58:02.596442 containerd[1738]: time="2026-01-23T23:58:02.596115818Z" level=info msg="Ensure that sandbox 0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473 in task-service has been cleanup successfully"
Jan 23 23:58:02.601130 kubelet[3215]: I0123 23:58:02.601071 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba"
Jan 23 23:58:02.604186 containerd[1738]: time="2026-01-23T23:58:02.604155097Z" level=info msg="StopPodSandbox for \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\""
Jan 23 23:58:02.611371 containerd[1738]: time="2026-01-23T23:58:02.611322378Z" level=info msg="Ensure that sandbox 4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba in task-service has been cleanup successfully"
Jan 23 23:58:02.616327 kubelet[3215]: I0123 23:58:02.616291 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd"
Jan 23 23:58:02.619121 containerd[1738]: time="2026-01-23T23:58:02.619079478Z" level=info msg="StopPodSandbox for \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\""
Jan 23 23:58:02.619330 containerd[1738]: time="2026-01-23T23:58:02.619309557Z" level=info msg="Ensure that sandbox 7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd in task-service has been cleanup successfully"
Jan 23 23:58:02.624482 kubelet[3215]: I0123 23:58:02.624446 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4"
Jan 23 23:58:02.626597 containerd[1738]: time="2026-01-23T23:58:02.626544538Z" level=info msg="StopPodSandbox for \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\""
Jan 23 23:58:02.628114 containerd[1738]: time="2026-01-23T23:58:02.627533256Z" level=info msg="Ensure that sandbox 11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4 in task-service has been cleanup successfully"
Jan 23 23:58:02.689914 containerd[1738]: time="2026-01-23T23:58:02.689867693Z" level=error msg="StopPodSandbox for \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\" failed" error="failed to destroy network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:02.690315 containerd[1738]: time="2026-01-23T23:58:02.689868973Z" level=error msg="StopPodSandbox for \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\" failed" error="failed to destroy network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:02.691612 kubelet[3215]: E0123 23:58:02.691574 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95"
Jan 23 23:58:02.691722 kubelet[3215]: E0123 23:58:02.691526 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de"
Jan 23 23:58:02.691722 kubelet[3215]: E0123 23:58:02.691664 3215 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de"}
Jan 23 23:58:02.691722 kubelet[3215]: E0123 23:58:02.691698 3215 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a7bcaec-bb3f-491a-bd0f-d443085a7496\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:58:02.691826 kubelet[3215]: E0123 23:58:02.691619 3215 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95"}
Jan 23 23:58:02.691826 kubelet[3215]: E0123 23:58:02.691750 3215 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2db427fa-9e25-4d91-9748-361f655acfc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:58:02.691826 kubelet[3215]: E0123 23:58:02.691770 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2db427fa-9e25-4d91-9748-361f655acfc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7"
Jan 23 23:58:02.691826 kubelet[3215]: E0123 23:58:02.691809 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a7bcaec-bb3f-491a-bd0f-d443085a7496\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496"
Jan 23 23:58:02.693304 containerd[1738]: time="2026-01-23T23:58:02.693261084Z" level=error msg="StopPodSandbox for \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\" failed" error="failed to destroy network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:02.693590 kubelet[3215]: E0123 23:58:02.693520 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473"
Jan 23 23:58:02.693590 kubelet[3215]: E0123 23:58:02.693598 3215 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473"}
Jan 23 23:58:02.693718 kubelet[3215]: E0123 23:58:02.693625 3215 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:58:02.693718 kubelet[3215]: E0123 23:58:02.693647 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f"
Jan 23 23:58:02.702799 containerd[1738]: time="2026-01-23T23:58:02.702750339Z" level=error msg="StopPodSandbox for \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\" failed" error="failed to destroy network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:02.703038 kubelet[3215]: E0123 23:58:02.702960 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba"
Jan 23 23:58:02.703038 kubelet[3215]: E0123 23:58:02.703011 3215 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba"}
Jan 23 23:58:02.703464 kubelet[3215]: E0123 23:58:02.703043 3215 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b65147e-f60b-40c9-8c5d-17265b54435d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:58:02.703464 kubelet[3215]: E0123 23:58:02.703070 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b65147e-f60b-40c9-8c5d-17265b54435d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nfkkc" podUID="7b65147e-f60b-40c9-8c5d-17265b54435d"
Jan 23 23:58:02.705190 containerd[1738]: time="2026-01-23T23:58:02.705139693Z" level=error msg="StopPodSandbox for \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\" failed" error="failed to destroy network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:02.705501 kubelet[3215]: E0123 23:58:02.705457 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4"
Jan 23 23:58:02.705563 kubelet[3215]: E0123 23:58:02.705512 3215 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4"}
Jan 23 23:58:02.705563 kubelet[3215]: E0123 23:58:02.705540 3215 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0adf700e-5270-411b-82bf-1b013a95c851\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:58:02.705640 kubelet[3215]: E0123 23:58:02.705567 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0adf700e-5270-411b-82bf-1b013a95c851\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851"
Jan 23 23:58:02.707356 containerd[1738]: time="2026-01-23T23:58:02.707311127Z" level=error msg="StopPodSandbox for \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\" failed" error="failed to destroy network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 23:58:02.707587 kubelet[3215]: E0123 23:58:02.707549 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd"
Jan 23 23:58:02.707629 kubelet[3215]: E0123 23:58:02.707596 3215 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd"}
Jan 23 23:58:02.707629 kubelet[3215]: E0123 23:58:02.707622 3215 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"840af08f-d1f8-4fdc-a3d8-e0970397bca1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 23 23:58:02.707705 kubelet[3215]: E0123 23:58:02.707641 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"840af08f-d1f8-4fdc-a3d8-e0970397bca1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rzsxf" podUID="840af08f-d1f8-4fdc-a3d8-e0970397bca1"
Jan 23 23:58:06.078916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24232323.mount: Deactivated successfully.
Jan 23 23:58:06.486867 containerd[1738]: time="2026-01-23T23:58:06.486817282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:06.489796 containerd[1738]: time="2026-01-23T23:58:06.489554994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 23:58:06.492397 containerd[1738]: time="2026-01-23T23:58:06.492368867Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:06.495958 containerd[1738]: time="2026-01-23T23:58:06.495927698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:06.497142 containerd[1738]: time="2026-01-23T23:58:06.496575136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.891639884s" Jan 23 23:58:06.497142 containerd[1738]: time="2026-01-23T23:58:06.496604376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:58:06.515404 containerd[1738]: time="2026-01-23T23:58:06.515242767Z" level=info msg="CreateContainer within sandbox \"4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:58:06.557184 containerd[1738]: time="2026-01-23T23:58:06.557104538Z" level=info msg="CreateContainer within sandbox \"4860c990e526eabd59aaaef107d0a483727aee7df501e08fc6de0af7b6b11284\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"49d41bde9c0e5be0ee731d5b1414e8ecd90cb8e6882869012a31193ebb74f2f7\"" Jan 23 23:58:06.558480 containerd[1738]: time="2026-01-23T23:58:06.557746256Z" level=info msg="StartContainer for \"49d41bde9c0e5be0ee731d5b1414e8ecd90cb8e6882869012a31193ebb74f2f7\"" Jan 23 23:58:06.588660 systemd[1]: Started cri-containerd-49d41bde9c0e5be0ee731d5b1414e8ecd90cb8e6882869012a31193ebb74f2f7.scope - libcontainer container 49d41bde9c0e5be0ee731d5b1414e8ecd90cb8e6882869012a31193ebb74f2f7. Jan 23 23:58:06.635803 containerd[1738]: time="2026-01-23T23:58:06.635679012Z" level=info msg="StartContainer for \"49d41bde9c0e5be0ee731d5b1414e8ecd90cb8e6882869012a31193ebb74f2f7\" returns successfully" Jan 23 23:58:06.890511 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 23:58:06.890610 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 23 23:58:06.996830 kubelet[3215]: I0123 23:58:06.996760 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9rvdv" podStartSLOduration=1.221741771 podStartE2EDuration="15.996741308s" podCreationTimestamp="2026-01-23 23:57:51 +0000 UTC" firstStartedPulling="2026-01-23 23:57:51.722567556 +0000 UTC m=+25.426214315" lastFinishedPulling="2026-01-23 23:58:06.497567093 +0000 UTC m=+40.201213852" observedRunningTime="2026-01-23 23:58:06.666023493 +0000 UTC m=+40.369670252" watchObservedRunningTime="2026-01-23 23:58:06.996741308 +0000 UTC m=+40.700388067" Jan 23 23:58:07.003332 containerd[1738]: time="2026-01-23T23:58:07.003027611Z" level=info msg="StopPodSandbox for \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\"" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.114 [INFO][4373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.115 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" iface="eth0" netns="/var/run/netns/cni-0ee2ce4f-685d-6e2f-3a88-04405b51acc4" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.115 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" iface="eth0" netns="/var/run/netns/cni-0ee2ce4f-685d-6e2f-3a88-04405b51acc4" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.119 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" iface="eth0" netns="/var/run/netns/cni-0ee2ce4f-685d-6e2f-3a88-04405b51acc4" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.119 [INFO][4373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.119 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.148 [INFO][4385] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.148 [INFO][4385] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.149 [INFO][4385] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.162 [WARNING][4385] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.162 [INFO][4385] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.167 [INFO][4385] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:07.172576 containerd[1738]: 2026-01-23 23:58:07.170 [INFO][4373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:07.175340 systemd[1]: run-netns-cni\x2d0ee2ce4f\x2d685d\x2d6e2f\x2d3a88\x2d04405b51acc4.mount: Deactivated successfully. Jan 23 23:58:07.175869 containerd[1738]: time="2026-01-23T23:58:07.175704320Z" level=info msg="TearDown network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\" successfully" Jan 23 23:58:07.175869 containerd[1738]: time="2026-01-23T23:58:07.175739080Z" level=info msg="StopPodSandbox for \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\" returns successfully" Jan 23 23:58:07.288298 kubelet[3215]: I0123 23:58:07.287110 3215 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8da9f149-a274-4b8e-a6f7-63da57140a84-whisker-backend-key-pair\") pod \"8da9f149-a274-4b8e-a6f7-63da57140a84\" (UID: \"8da9f149-a274-4b8e-a6f7-63da57140a84\") " Jan 23 23:58:07.288298 kubelet[3215]: I0123 23:58:07.287180 3215 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8da9f149-a274-4b8e-a6f7-63da57140a84-whisker-ca-bundle\") pod \"8da9f149-a274-4b8e-a6f7-63da57140a84\" (UID: \"8da9f149-a274-4b8e-a6f7-63da57140a84\") " Jan 23 23:58:07.288298 kubelet[3215]: I0123 23:58:07.287207 3215 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnkvc\" (UniqueName: \"kubernetes.io/projected/8da9f149-a274-4b8e-a6f7-63da57140a84-kube-api-access-mnkvc\") pod \"8da9f149-a274-4b8e-a6f7-63da57140a84\" (UID: \"8da9f149-a274-4b8e-a6f7-63da57140a84\") " Jan 23 23:58:07.288298 kubelet[3215]: I0123 23:58:07.288212 3215 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8da9f149-a274-4b8e-a6f7-63da57140a84-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8da9f149-a274-4b8e-a6f7-63da57140a84" (UID: "8da9f149-a274-4b8e-a6f7-63da57140a84"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:58:07.292699 kubelet[3215]: I0123 23:58:07.292659 3215 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da9f149-a274-4b8e-a6f7-63da57140a84-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8da9f149-a274-4b8e-a6f7-63da57140a84" (UID: "8da9f149-a274-4b8e-a6f7-63da57140a84"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:58:07.293217 systemd[1]: var-lib-kubelet-pods-8da9f149\x2da274\x2d4b8e\x2da6f7\x2d63da57140a84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmnkvc.mount: Deactivated successfully. Jan 23 23:58:07.293486 kubelet[3215]: I0123 23:58:07.293454 3215 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da9f149-a274-4b8e-a6f7-63da57140a84-kube-api-access-mnkvc" (OuterVolumeSpecName: "kube-api-access-mnkvc") pod "8da9f149-a274-4b8e-a6f7-63da57140a84" (UID: "8da9f149-a274-4b8e-a6f7-63da57140a84"). InnerVolumeSpecName "kube-api-access-mnkvc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:07.296498 systemd[1]: var-lib-kubelet-pods-8da9f149\x2da274\x2d4b8e\x2da6f7\x2d63da57140a84-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 23:58:07.387619 kubelet[3215]: I0123 23:58:07.387566 3215 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8da9f149-a274-4b8e-a6f7-63da57140a84-whisker-ca-bundle\") on node \"ci-4081.3.6-n-31deed6810\" DevicePath \"\"" Jan 23 23:58:07.387619 kubelet[3215]: I0123 23:58:07.387596 3215 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mnkvc\" (UniqueName: \"kubernetes.io/projected/8da9f149-a274-4b8e-a6f7-63da57140a84-kube-api-access-mnkvc\") on node \"ci-4081.3.6-n-31deed6810\" DevicePath \"\"" Jan 23 23:58:07.387619 kubelet[3215]: I0123 23:58:07.387605 3215 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8da9f149-a274-4b8e-a6f7-63da57140a84-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-31deed6810\" DevicePath \"\"" Jan 23 23:58:07.651471 systemd[1]: Removed slice kubepods-besteffort-pod8da9f149_a274_4b8e_a6f7_63da57140a84.slice - libcontainer container kubepods-besteffort-pod8da9f149_a274_4b8e_a6f7_63da57140a84.slice. Jan 23 23:58:07.726805 systemd[1]: Created slice kubepods-besteffort-podc69a3a9c_9be4_419b_bd7b_2c7c74ce300e.slice - libcontainer container kubepods-besteffort-podc69a3a9c_9be4_419b_bd7b_2c7c74ce300e.slice. 
Jan 23 23:58:07.790769 kubelet[3215]: I0123 23:58:07.790731 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c69a3a9c-9be4-419b-bd7b-2c7c74ce300e-whisker-backend-key-pair\") pod \"whisker-58bcdfdb7b-m76fr\" (UID: \"c69a3a9c-9be4-419b-bd7b-2c7c74ce300e\") " pod="calico-system/whisker-58bcdfdb7b-m76fr" Jan 23 23:58:07.790769 kubelet[3215]: I0123 23:58:07.790776 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hr8q\" (UniqueName: \"kubernetes.io/projected/c69a3a9c-9be4-419b-bd7b-2c7c74ce300e-kube-api-access-5hr8q\") pod \"whisker-58bcdfdb7b-m76fr\" (UID: \"c69a3a9c-9be4-419b-bd7b-2c7c74ce300e\") " pod="calico-system/whisker-58bcdfdb7b-m76fr" Jan 23 23:58:07.790938 kubelet[3215]: I0123 23:58:07.790796 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c69a3a9c-9be4-419b-bd7b-2c7c74ce300e-whisker-ca-bundle\") pod \"whisker-58bcdfdb7b-m76fr\" (UID: \"c69a3a9c-9be4-419b-bd7b-2c7c74ce300e\") " pod="calico-system/whisker-58bcdfdb7b-m76fr" Jan 23 23:58:08.037540 containerd[1738]: time="2026-01-23T23:58:08.037497346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58bcdfdb7b-m76fr,Uid:c69a3a9c-9be4-419b-bd7b-2c7c74ce300e,Namespace:calico-system,Attempt:0,}" Jan 23 23:58:08.211471 systemd-networkd[1363]: cali25f081f8809: Link UP Jan 23 23:58:08.212380 systemd-networkd[1363]: cali25f081f8809: Gained carrier Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.102 [INFO][4408] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.115 [INFO][4408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0 whisker-58bcdfdb7b- calico-system c69a3a9c-9be4-419b-bd7b-2c7c74ce300e 886 0 2026-01-23 23:58:07 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58bcdfdb7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-31deed6810 whisker-58bcdfdb7b-m76fr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali25f081f8809 [] [] }} ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Namespace="calico-system" Pod="whisker-58bcdfdb7b-m76fr" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.115 [INFO][4408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Namespace="calico-system" Pod="whisker-58bcdfdb7b-m76fr" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.138 [INFO][4420] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" HandleID="k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.139 [INFO][4420] ipam/ipam_plugin.go 275:
Auto assigning IP ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" HandleID="k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-31deed6810", "pod":"whisker-58bcdfdb7b-m76fr", "timestamp":"2026-01-23 23:58:08.138867641 +0000 UTC"}, Hostname:"ci-4081.3.6-n-31deed6810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.139 [INFO][4420] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.139 [INFO][4420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.139 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-31deed6810' Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.148 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.151 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.155 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.156 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.158 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.158 [INFO][4420] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.161 [INFO][4420] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.167 [INFO][4420] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.173 [INFO][4420] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.1/26] block=192.168.70.0/26 handle="k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.174 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.1/26] handle="k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.174 [INFO][4420] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:08.230783 containerd[1738]: 2026-01-23 23:58:08.174 [INFO][4420] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.1/26] IPv6=[] ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" HandleID="k8s-pod-network.9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" Jan 23 23:58:08.231383 containerd[1738]: 2026-01-23 23:58:08.177 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Namespace="calico-system" Pod="whisker-58bcdfdb7b-m76fr" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0", GenerateName:"whisker-58bcdfdb7b-", Namespace:"calico-system", SelfLink:"", UID:"c69a3a9c-9be4-419b-bd7b-2c7c74ce300e", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58bcdfdb7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"", Pod:"whisker-58bcdfdb7b-m76fr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali25f081f8809", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:08.231383 containerd[1738]: 2026-01-23 23:58:08.177 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.1/32] ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Namespace="calico-system" Pod="whisker-58bcdfdb7b-m76fr" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" Jan 23 23:58:08.231383 containerd[1738]: 2026-01-23 23:58:08.177 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25f081f8809 ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Namespace="calico-system" Pod="whisker-58bcdfdb7b-m76fr" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" Jan 23 23:58:08.231383 containerd[1738]: 2026-01-23 23:58:08.213 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Namespace="calico-system" Pod="whisker-58bcdfdb7b-m76fr" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" Jan 23 23:58:08.231383 containerd[1738]: 2026-01-23 23:58:08.213 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Namespace="calico-system" Pod="whisker-58bcdfdb7b-m76fr" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0", GenerateName:"whisker-58bcdfdb7b-", Namespace:"calico-system", SelfLink:"", UID:"c69a3a9c-9be4-419b-bd7b-2c7c74ce300e", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58bcdfdb7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b", Pod:"whisker-58bcdfdb7b-m76fr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali25f081f8809", MAC:"4e:48:8e:e6:3b:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:08.231383 containerd[1738]: 2026-01-23 23:58:08.228 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b" Namespace="calico-system" Pod="whisker-58bcdfdb7b-m76fr" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--58bcdfdb7b--m76fr-eth0" Jan 23 23:58:08.251129 containerd[1738]: time="2026-01-23T23:58:08.250749428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:08.251129 containerd[1738]: time="2026-01-23T23:58:08.250821308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:08.251129 containerd[1738]: time="2026-01-23T23:58:08.250836668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:08.251129 containerd[1738]: time="2026-01-23T23:58:08.250925187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:08.283273 systemd[1]: run-containerd-runc-k8s.io-9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b-runc.eY3Rbi.mount: Deactivated successfully. Jan 23 23:58:08.302836 systemd[1]: Started cri-containerd-9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b.scope - libcontainer container 9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b.
Jan 23 23:58:08.379212 containerd[1738]: time="2026-01-23T23:58:08.379090612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58bcdfdb7b-m76fr,Uid:c69a3a9c-9be4-419b-bd7b-2c7c74ce300e,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e968355176e6e13c33136e9720214d77872aa38ecc83cb5db6118f6b697556b\"" Jan 23 23:58:08.382229 containerd[1738]: time="2026-01-23T23:58:08.382000965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:58:08.435178 kubelet[3215]: I0123 23:58:08.434913 3215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da9f149-a274-4b8e-a6f7-63da57140a84" path="/var/lib/kubelet/pods/8da9f149-a274-4b8e-a6f7-63da57140a84/volumes" Jan 23 23:58:08.626791 kubelet[3215]: I0123 23:58:08.626622 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 23:58:08.630526 containerd[1738]: time="2026-01-23T23:58:08.630456115Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:08.632690 containerd[1738]: time="2026-01-23T23:58:08.632644509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:58:08.632779 containerd[1738]: time="2026-01-23T23:58:08.632752389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:58:08.633340 kubelet[3215]: E0123 23:58:08.632894 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:08.633340 kubelet[3215]: E0123 23:58:08.632942 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:08.633340 kubelet[3215]: E0123 23:58:08.633011 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-58bcdfdb7b-m76fr_calico-system(c69a3a9c-9be4-419b-bd7b-2c7c74ce300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:08.634226 containerd[1738]: time="2026-01-23T23:58:08.634198985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:58:08.901623 containerd[1738]: time="2026-01-23T23:58:08.901502969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:08.904019 containerd[1738]: time="2026-01-23T23:58:08.903966042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:58:08.904109 containerd[1738]: time="2026-01-23T23:58:08.904081882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:08.904462 kubelet[3215]: E0123 23:58:08.904248 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:08.904462 kubelet[3215]: E0123 23:58:08.904299 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:08.904462 kubelet[3215]: E0123 23:58:08.904371 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-58bcdfdb7b-m76fr_calico-system(c69a3a9c-9be4-419b-bd7b-2c7c74ce300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:08.904619 kubelet[3215]: E0123 23:58:08.904419 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e" Jan 23 23:58:09.651791 kubelet[3215]: E0123 23:58:09.651740 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e" Jan 23 23:58:09.836450 kernel: bpftool[4627]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 23 23:58:10.039213 systemd-networkd[1363]: vxlan.calico: Link UP Jan 23 23:58:10.039227 systemd-networkd[1363]: vxlan.calico: Gained carrier Jan 23 23:58:10.059624 systemd-networkd[1363]: cali25f081f8809: Gained IPv6LL Jan 23 23:58:11.338597 systemd-networkd[1363]: vxlan.calico: Gained IPv6LL Jan 23 23:58:11.479400 kubelet[3215]: I0123 23:58:11.478873 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 23:58:14.432871 containerd[1738]: time="2026-01-23T23:58:14.432820458Z" level=info msg="StopPodSandbox for \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\"" Jan 23 23:58:14.434017 containerd[1738]: time="2026-01-23T23:58:14.433471177Z" level=info msg="StopPodSandbox for \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\"" Jan 23 23:58:14.436567 containerd[1738]: time="2026-01-23T23:58:14.434569654Z" level=info msg="StopPodSandbox for \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\"" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.520 [INFO][4776] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.520 [INFO][4776] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" iface="eth0" netns="/var/run/netns/cni-43dfd73a-3652-eaf6-970b-37f220c91a08" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.522 [INFO][4776] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" iface="eth0" netns="/var/run/netns/cni-43dfd73a-3652-eaf6-970b-37f220c91a08" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.522 [INFO][4776] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" iface="eth0" netns="/var/run/netns/cni-43dfd73a-3652-eaf6-970b-37f220c91a08" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.522 [INFO][4776] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.522 [INFO][4776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.580 [INFO][4796] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.581 [INFO][4796] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.581 [INFO][4796] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.602 [WARNING][4796] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.602 [INFO][4796] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.607 [INFO][4796] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:14.621018 containerd[1738]: 2026-01-23 23:58:14.613 [INFO][4776] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:14.624480 containerd[1738]: time="2026-01-23T23:58:14.621496680Z" level=info msg="TearDown network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\" successfully" Jan 23 23:58:14.624480 containerd[1738]: time="2026-01-23T23:58:14.621536199Z" level=info msg="StopPodSandbox for \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\" returns successfully" Jan 23 23:58:14.626712 systemd[1]: run-netns-cni\x2d43dfd73a\x2d3652\x2deaf6\x2d970b\x2d37f220c91a08.mount: Deactivated successfully. Jan 23 23:58:14.633971 containerd[1738]: time="2026-01-23T23:58:14.633930007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rzsxf,Uid:840af08f-d1f8-4fdc-a3d8-e0970397bca1,Namespace:kube-system,Attempt:1,}" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.545 [INFO][4777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.549 [INFO][4777] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" iface="eth0" netns="/var/run/netns/cni-6987f510-0528-2e8c-c827-a4b46df990ca" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.549 [INFO][4777] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" iface="eth0" netns="/var/run/netns/cni-6987f510-0528-2e8c-c827-a4b46df990ca" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.550 [INFO][4777] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" iface="eth0" netns="/var/run/netns/cni-6987f510-0528-2e8c-c827-a4b46df990ca" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.550 [INFO][4777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.551 [INFO][4777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.629 [INFO][4801] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.631 [INFO][4801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.631 [INFO][4801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.646 [WARNING][4801] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.646 [INFO][4801] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.649 [INFO][4801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:14.655611 containerd[1738]: 2026-01-23 23:58:14.652 [INFO][4777] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:14.661770 containerd[1738]: time="2026-01-23T23:58:14.655745509Z" level=info msg="TearDown network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\" successfully" Jan 23 23:58:14.661770 containerd[1738]: time="2026-01-23T23:58:14.655771229Z" level=info msg="StopPodSandbox for \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\" returns successfully" Jan 23 23:58:14.663119 containerd[1738]: time="2026-01-23T23:58:14.662985690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kzqw2,Uid:9e81a44d-05db-4251-91b5-ae7d0d2169e6,Namespace:calico-system,Attempt:1,}" Jan 23 23:58:14.663587 systemd[1]: run-netns-cni\x2d6987f510\x2d0528\x2d2e8c\x2dc827\x2da4b46df990ca.mount: Deactivated successfully. Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.558 [INFO][4784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.558 [INFO][4784] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" iface="eth0" netns="/var/run/netns/cni-465bce66-cb1f-f80c-0ed3-5a808b6844f1" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.558 [INFO][4784] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" iface="eth0" netns="/var/run/netns/cni-465bce66-cb1f-f80c-0ed3-5a808b6844f1" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.564 [INFO][4784] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" iface="eth0" netns="/var/run/netns/cni-465bce66-cb1f-f80c-0ed3-5a808b6844f1" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.564 [INFO][4784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.564 [INFO][4784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.643 [INFO][4806] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.643 [INFO][4806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.649 [INFO][4806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.676 [WARNING][4806] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.676 [INFO][4806] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.682 [INFO][4806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:14.692396 containerd[1738]: 2026-01-23 23:58:14.687 [INFO][4784] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:14.695391 systemd[1]: run-netns-cni\x2d465bce66\x2dcb1f\x2df80c\x2d0ed3\x2d5a808b6844f1.mount: Deactivated successfully. 
Jan 23 23:58:14.700255 containerd[1738]: time="2026-01-23T23:58:14.698481356Z" level=info msg="TearDown network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\" successfully" Jan 23 23:58:14.700255 containerd[1738]: time="2026-01-23T23:58:14.698515996Z" level=info msg="StopPodSandbox for \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\" returns successfully" Jan 23 23:58:14.705171 containerd[1738]: time="2026-01-23T23:58:14.705124659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4879dc-2hqft,Uid:0adf700e-5270-411b-82bf-1b013a95c851,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:58:14.903674 systemd-networkd[1363]: cali722a86487f2: Link UP Jan 23 23:58:14.908660 systemd-networkd[1363]: cali722a86487f2: Gained carrier Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.753 [INFO][4817] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0 coredns-66bc5c9577- kube-system 840af08f-d1f8-4fdc-a3d8-e0970397bca1 925 0 2026-01-23 23:57:32 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-31deed6810 coredns-66bc5c9577-rzsxf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali722a86487f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-rzsxf" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.754 [INFO][4817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-rzsxf" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.811 [INFO][4829] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" HandleID="k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.812 [INFO][4829] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" HandleID="k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-31deed6810", "pod":"coredns-66bc5c9577-rzsxf", "timestamp":"2026-01-23 23:58:14.811555217 +0000 UTC"}, Hostname:"ci-4081.3.6-n-31deed6810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.812 [INFO][4829] ipam/ipam_plugin.go 377: About to acquire
host-wide IPAM lock. Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.813 [INFO][4829] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.813 [INFO][4829] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-31deed6810' Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.831 [INFO][4829] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.851 [INFO][4829] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.864 [INFO][4829] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.868 [INFO][4829] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.871 [INFO][4829] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.872 [INFO][4829] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.877 [INFO][4829] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.887 [INFO][4829] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.895 [INFO][4829] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.2/26] block=192.168.70.0/26 handle="k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.895 [INFO][4829] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.2/26] handle="k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.895 [INFO][4829] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:58:14.939547 containerd[1738]: 2026-01-23 23:58:14.895 [INFO][4829] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.2/26] IPv6=[] ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" HandleID="k8s-pod-network.7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.940081 containerd[1738]: 2026-01-23 23:58:14.898 [INFO][4817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-rzsxf" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"840af08f-d1f8-4fdc-a3d8-e0970397bca1", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"", Pod:"coredns-66bc5c9577-rzsxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali722a86487f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:14.940081 containerd[1738]: 2026-01-23 23:58:14.898 [INFO][4817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.2/32] ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-rzsxf" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.940081 containerd[1738]: 2026-01-23 23:58:14.898 [INFO][4817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali722a86487f2 ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-rzsxf" 
WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.940081 containerd[1738]: 2026-01-23 23:58:14.908 [INFO][4817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-rzsxf" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.940081 containerd[1738]: 2026-01-23 23:58:14.909 [INFO][4817] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-rzsxf" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"840af08f-d1f8-4fdc-a3d8-e0970397bca1", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce", Pod:"coredns-66bc5c9577-rzsxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali722a86487f2", MAC:"3a:9a:b2:4c:dd:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:14.940300 containerd[1738]: 2026-01-23 23:58:14.935 [INFO][4817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce" Namespace="kube-system" Pod="coredns-66bc5c9577-rzsxf" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:14.975504 containerd[1738]: time="2026-01-23T23:58:14.974084107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:14.975504 containerd[1738]: time="2026-01-23T23:58:14.974218667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:14.977819 containerd[1738]: time="2026-01-23T23:58:14.975417104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:14.978271 containerd[1738]: time="2026-01-23T23:58:14.978203657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:15.003614 systemd[1]: Started cri-containerd-7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce.scope - libcontainer container 7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce. Jan 23 23:58:15.022589 systemd-networkd[1363]: cali72948936d35: Link UP Jan 23 23:58:15.029433 systemd-networkd[1363]: cali72948936d35: Gained carrier Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.858 [INFO][4843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0 calico-apiserver-7ddd4879dc- calico-apiserver 0adf700e-5270-411b-82bf-1b013a95c851 927 0 2026-01-23 23:57:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ddd4879dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-31deed6810 calico-apiserver-7ddd4879dc-2hqft eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72948936d35 [] [] }} ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2hqft" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.861 [INFO][4843] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2hqft" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.913 [INFO][4862] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" HandleID="k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.914 [INFO][4862] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" HandleID="k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-31deed6810", "pod":"calico-apiserver-7ddd4879dc-2hqft", "timestamp":"2026-01-23 23:58:14.913898187 +0000 UTC"}, 
Hostname:"ci-4081.3.6-n-31deed6810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.914 [INFO][4862] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.914 [INFO][4862] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.914 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-31deed6810' Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.934 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.949 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.962 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.968 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.971 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.971 [INFO][4862] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.976 [INFO][4862] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.983 [INFO][4862] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.995 [INFO][4862] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.3/26] block=192.168.70.0/26 handle="k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.998 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.3/26] handle="k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.998 [INFO][4862] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:58:15.062866 containerd[1738]: 2026-01-23 23:58:14.998 [INFO][4862] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.3/26] IPv6=[] ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" HandleID="k8s-pod-network.982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:15.064135 containerd[1738]: 2026-01-23 23:58:15.006 [INFO][4843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2hqft" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0", GenerateName:"calico-apiserver-7ddd4879dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0adf700e-5270-411b-82bf-1b013a95c851", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4879dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"", Pod:"calico-apiserver-7ddd4879dc-2hqft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72948936d35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:15.064135 containerd[1738]: 2026-01-23 23:58:15.006 [INFO][4843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.3/32] ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2hqft" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:15.064135 containerd[1738]: 2026-01-23 23:58:15.006 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72948936d35 ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2hqft" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:15.064135 containerd[1738]: 2026-01-23 23:58:15.033 [INFO][4843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2hqft" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:15.064135 containerd[1738]: 2026-01-23 23:58:15.034 [INFO][4843] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2hqft" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0", GenerateName:"calico-apiserver-7ddd4879dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0adf700e-5270-411b-82bf-1b013a95c851", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4879dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b", Pod:"calico-apiserver-7ddd4879dc-2hqft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72948936d35", MAC:"e6:fd:1e:f3:5a:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:15.064135 containerd[1738]: 2026-01-23 23:58:15.060 [INFO][4843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2hqft" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:15.094622 containerd[1738]: time="2026-01-23T23:58:15.094582189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rzsxf,Uid:840af08f-d1f8-4fdc-a3d8-e0970397bca1,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce\"" Jan 23 23:58:15.111240 containerd[1738]: time="2026-01-23T23:58:15.111198265Z" level=info msg="CreateContainer within sandbox \"7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:58:15.116612 systemd-networkd[1363]: cali580df5705fe: Link UP Jan 23 23:58:15.119123 systemd-networkd[1363]: cali580df5705fe: Gained carrier Jan 23 23:58:15.120051 containerd[1738]: time="2026-01-23T23:58:15.118612885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:15.120272 containerd[1738]: time="2026-01-23T23:58:15.120194441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:15.122014 containerd[1738]: time="2026-01-23T23:58:15.121954837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:15.122207 containerd[1738]: time="2026-01-23T23:58:15.122073356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:14.860 [INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0 csi-node-driver- calico-system 9e81a44d-05db-4251-91b5-ae7d0d2169e6 926 0 2026-01-23 23:57:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-31deed6810 csi-node-driver-kzqw2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali580df5705fe [] [] }} ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Namespace="calico-system" Pod="csi-node-driver-kzqw2" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:14.860 [INFO][4830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Namespace="calico-system" Pod="csi-node-driver-kzqw2" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:14.929 [INFO][4867] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" HandleID="k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:14.929 [INFO][4867] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" HandleID="k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-31deed6810", "pod":"csi-node-driver-kzqw2", "timestamp":"2026-01-23 23:58:14.929279466 +0000 UTC"}, Hostname:"ci-4081.3.6-n-31deed6810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:14.929 [INFO][4867] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:14.998 [INFO][4867] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:14.998 [INFO][4867] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-31deed6810' Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.034 [INFO][4867] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.049 [INFO][4867] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.067 [INFO][4867] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.070 [INFO][4867] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.076 [INFO][4867] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.077 [INFO][4867] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.081 [INFO][4867] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94 Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.092 [INFO][4867] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.105 [INFO][4867] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.4/26] block=192.168.70.0/26 handle="k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.105 [INFO][4867] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.4/26] handle="k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.106 [INFO][4867] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
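
The interleaved timestamps above make the lock serialization concrete: plugin [4867] logged "About to acquire host-wide IPAM lock." at 23:58:14.929 but only acquired it at 23:58:14.998, the same instant [4862] released it, and the three sandboxes therefore receive consecutive, non-conflicting addresses (192.168.70.2, .3, .4) from the one affine block. A sketch of that pattern, assuming only that the host-wide lock behaves like a per-node mutex (illustrative, not Calico's actual implementation):

    package main

    import (
        "fmt"
        "sync"
    )

    var hostWideLock sync.Mutex // stands in for Calico's host-wide IPAM lock

    // assign hands out the next ordinal in the block; the mutex guarantees
    // concurrent CNI ADDs never claim the same address, as seen in the log.
    func assign(pod string, next *int, out chan<- string) {
        hostWideLock.Lock() // "About to acquire host-wide IPAM lock."
        addr := fmt.Sprintf("192.168.70.%d", *next)
        *next++
        hostWideLock.Unlock() // "Released host-wide IPAM lock."
        out <- pod + " -> " + addr
    }

    func main() {
        next := 2 // first free ordinal in 192.168.70.0/26 at this point
        out := make(chan string, 3)
        var wg sync.WaitGroup
        for _, pod := range []string{"coredns-66bc5c9577-rzsxf",
            "calico-apiserver-7ddd4879dc-2hqft", "csi-node-driver-kzqw2"} {
            wg.Add(1)
            go func(p string) { defer wg.Done(); assign(p, &next, out) }(pod)
        }
        wg.Wait()
        close(out)
        for line := range out {
            fmt.Println(line) // order varies; addresses are always distinct
        }
    }
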
Jan 23 23:58:15.152431 containerd[1738]: 2026-01-23 23:58:15.106 [INFO][4867] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.4/26] IPv6=[] ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" HandleID="k8s-pod-network.10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:15.153063 containerd[1738]: 2026-01-23 23:58:15.111 [INFO][4830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Namespace="calico-system" Pod="csi-node-driver-kzqw2" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e81a44d-05db-4251-91b5-ae7d0d2169e6", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"", Pod:"csi-node-driver-kzqw2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali580df5705fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:15.153063 containerd[1738]: 2026-01-23 23:58:15.111 [INFO][4830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.4/32] ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Namespace="calico-system" Pod="csi-node-driver-kzqw2" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:15.153063 containerd[1738]: 2026-01-23 23:58:15.112 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali580df5705fe ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Namespace="calico-system" Pod="csi-node-driver-kzqw2" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:15.153063 containerd[1738]: 2026-01-23 23:58:15.121 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Namespace="calico-system" Pod="csi-node-driver-kzqw2" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:15.153063 containerd[1738]: 2026-01-23 23:58:15.122 [INFO][4830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Namespace="calico-system" Pod="csi-node-driver-kzqw2" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e81a44d-05db-4251-91b5-ae7d0d2169e6", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94", Pod:"csi-node-driver-kzqw2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali580df5705fe", MAC:"92:a3:d9:14:1d:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:15.153063 containerd[1738]: 2026-01-23 23:58:15.146 [INFO][4830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94" Namespace="calico-system" Pod="csi-node-driver-kzqw2" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:15.154639 systemd[1]: Started cri-containerd-982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b.scope - libcontainer container 982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b. Jan 23 23:58:15.172178 containerd[1738]: time="2026-01-23T23:58:15.172135744Z" level=info msg="CreateContainer within sandbox \"7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf40cf5e03880ef10668fabd46bce34f797f9f6235766c8a7918782f4946ef5d\"" Jan 23 23:58:15.174510 containerd[1738]: time="2026-01-23T23:58:15.174097779Z" level=info msg="StartContainer for \"bf40cf5e03880ef10668fabd46bce34f797f9f6235766c8a7918782f4946ef5d\"" Jan 23 23:58:15.191356 containerd[1738]: time="2026-01-23T23:58:15.191102494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:15.191356 containerd[1738]: time="2026-01-23T23:58:15.191155454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:15.191356 containerd[1738]: time="2026-01-23T23:58:15.191178534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:15.191356 containerd[1738]: time="2026-01-23T23:58:15.191249413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:15.210817 containerd[1738]: time="2026-01-23T23:58:15.210771082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4879dc-2hqft,Uid:0adf700e-5270-411b-82bf-1b013a95c851,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b\"" Jan 23 23:58:15.215237 containerd[1738]: time="2026-01-23T23:58:15.214131633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:15.231634 systemd[1]: Started cri-containerd-10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94.scope - libcontainer container 10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94. Jan 23 23:58:15.234714 systemd[1]: Started cri-containerd-bf40cf5e03880ef10668fabd46bce34f797f9f6235766c8a7918782f4946ef5d.scope - libcontainer container bf40cf5e03880ef10668fabd46bce34f797f9f6235766c8a7918782f4946ef5d. Jan 23 23:58:15.269199 containerd[1738]: time="2026-01-23T23:58:15.269144527Z" level=info msg="StartContainer for \"bf40cf5e03880ef10668fabd46bce34f797f9f6235766c8a7918782f4946ef5d\" returns successfully" Jan 23 23:58:15.291274 containerd[1738]: time="2026-01-23T23:58:15.291133749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kzqw2,Uid:9e81a44d-05db-4251-91b5-ae7d0d2169e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94\"" Jan 23 23:58:15.527708 containerd[1738]: time="2026-01-23T23:58:15.527590364Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:15.532103 containerd[1738]: time="2026-01-23T23:58:15.532056112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:15.532189 containerd[1738]: time="2026-01-23T23:58:15.532163152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:15.532552 kubelet[3215]: E0123 23:58:15.532351 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:15.532552 kubelet[3215]: E0123 23:58:15.532395 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:15.532939 kubelet[3215]: E0123 23:58:15.532536 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7ddd4879dc-2hqft_calico-apiserver(0adf700e-5270-411b-82bf-1b013a95c851): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:15.532970 containerd[1738]: time="2026-01-23T23:58:15.532690951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:58:15.532997 kubelet[3215]: E0123 23:58:15.532930 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:58:15.673057 kubelet[3215]: E0123 23:58:15.673012 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:58:15.685453 kubelet[3215]: I0123 23:58:15.685111 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rzsxf" podStartSLOduration=43.685095228 podStartE2EDuration="43.685095228s" podCreationTimestamp="2026-01-23 23:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:58:15.682970074 +0000 UTC m=+49.386616833" watchObservedRunningTime="2026-01-23 23:58:15.685095228 +0000 UTC m=+49.388741987" Jan 23 23:58:15.778071 containerd[1738]: time="2026-01-23T23:58:15.777954222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:15.781063 containerd[1738]: time="2026-01-23T23:58:15.781019454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:58:15.781169 containerd[1738]: time="2026-01-23T23:58:15.781135454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:58:15.781624 kubelet[3215]: E0123 23:58:15.781344 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:15.781624 kubelet[3215]: E0123 23:58:15.781393 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:15.781624 kubelet[3215]: E0123 23:58:15.781484 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:15.783070 containerd[1738]: time="2026-01-23T23:58:15.782832930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:58:16.055489 containerd[1738]: time="2026-01-23T23:58:16.055364609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:16.058333 containerd[1738]: time="2026-01-23T23:58:16.058285121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:58:16.058538 containerd[1738]: time="2026-01-23T23:58:16.058405081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:58:16.058876 kubelet[3215]: E0123 23:58:16.058592 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:16.058876 kubelet[3215]: E0123 23:58:16.058644 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:16.058876 kubelet[3215]: E0123 23:58:16.058720 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:16.060417 kubelet[3215]: E0123 23:58:16.058760 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:58:16.266668 systemd-networkd[1363]: cali722a86487f2: Gained IPv6LL Jan 23 23:58:16.433693 containerd[1738]: time="2026-01-23T23:58:16.433566329Z" level=info msg="StopPodSandbox for \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\"" Jan 23 23:58:16.434902 containerd[1738]: time="2026-01-23T23:58:16.433586409Z" level=info msg="StopPodSandbox for \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\"" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.506 [INFO][5090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.507 [INFO][5090] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" iface="eth0" netns="/var/run/netns/cni-4a2adb58-df76-8dfd-e788-cb3512e0646b" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.507 [INFO][5090] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" iface="eth0" netns="/var/run/netns/cni-4a2adb58-df76-8dfd-e788-cb3512e0646b" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.508 [INFO][5090] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" iface="eth0" netns="/var/run/netns/cni-4a2adb58-df76-8dfd-e788-cb3512e0646b" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.508 [INFO][5090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.508 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.536 [INFO][5104] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.536 [INFO][5104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.536 [INFO][5104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.547 [WARNING][5104] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.547 [INFO][5104] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.548 [INFO][5104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:16.554656 containerd[1738]: 2026-01-23 23:58:16.550 [INFO][5090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:16.557537 containerd[1738]: time="2026-01-23T23:58:16.555544287Z" level=info msg="TearDown network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\" successfully" Jan 23 23:58:16.557537 containerd[1738]: time="2026-01-23T23:58:16.555573887Z" level=info msg="StopPodSandbox for \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\" returns successfully" Jan 23 23:58:16.557994 systemd[1]: run-netns-cni\x2d4a2adb58\x2ddf76\x2d8dfd\x2de788\x2dcb3512e0646b.mount: Deactivated successfully. Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.505 [INFO][5089] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.505 [INFO][5089] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" iface="eth0" netns="/var/run/netns/cni-399e3a97-25a5-478b-e592-af8566bc2c61" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.508 [INFO][5089] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" iface="eth0" netns="/var/run/netns/cni-399e3a97-25a5-478b-e592-af8566bc2c61" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.508 [INFO][5089] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" iface="eth0" netns="/var/run/netns/cni-399e3a97-25a5-478b-e592-af8566bc2c61" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.508 [INFO][5089] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.508 [INFO][5089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.547 [INFO][5105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.547 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.548 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.563 [WARNING][5105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.563 [INFO][5105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.564 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:16.568111 containerd[1738]: 2026-01-23 23:58:16.566 [INFO][5089] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:16.570639 containerd[1738]: time="2026-01-23T23:58:16.570583207Z" level=info msg="TearDown network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\" successfully" Jan 23 23:58:16.570639 containerd[1738]: time="2026-01-23T23:58:16.570632567Z" level=info msg="StopPodSandbox for \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\" returns successfully" Jan 23 23:58:16.571998 systemd[1]: run-netns-cni\x2d399e3a97\x2d25a5\x2d478b\x2de592\x2daf8566bc2c61.mount: Deactivated successfully. 
Jan 23 23:58:16.575601 containerd[1738]: time="2026-01-23T23:58:16.575561154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nfkkc,Uid:7b65147e-f60b-40c9-8c5d-17265b54435d,Namespace:kube-system,Attempt:1,}" Jan 23 23:58:16.618112 containerd[1738]: time="2026-01-23T23:58:16.618045282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wtg4h,Uid:4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f,Namespace:calico-system,Attempt:1,}" Jan 23 23:58:16.692241 kubelet[3215]: E0123 23:58:16.692166 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:58:16.693818 kubelet[3215]: E0123 23:58:16.693443 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:58:16.714647 systemd-networkd[1363]: cali580df5705fe: Gained IPv6LL Jan 23 23:58:16.888916 systemd-networkd[1363]: calif4dd51655d4: Link UP Jan 23 23:58:16.890806 systemd-networkd[1363]: calif4dd51655d4: Gained carrier Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.773 [INFO][5118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0 goldmane-7c778bb748- calico-system 4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f 971 0 2026-01-23 23:57:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-31deed6810 goldmane-7c778bb748-wtg4h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif4dd51655d4 [] [] }} ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Namespace="calico-system" Pod="goldmane-7c778bb748-wtg4h" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.774 [INFO][5118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Namespace="calico-system" Pod="goldmane-7c778bb748-wtg4h" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.823 [INFO][5150] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" HandleID="k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.823 [INFO][5150] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" HandleID="k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3030), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-31deed6810", "pod":"goldmane-7c778bb748-wtg4h", "timestamp":"2026-01-23 23:58:16.82302334 +0000 UTC"}, Hostname:"ci-4081.3.6-n-31deed6810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.823 [INFO][5150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.823 [INFO][5150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.823 [INFO][5150] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-31deed6810' Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.842 [INFO][5150] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.848 [INFO][5150] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.854 [INFO][5150] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.856 [INFO][5150] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.858 [INFO][5150] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.858 [INFO][5150] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.860 [INFO][5150] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.868 [INFO][5150] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 
handle="k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.877 [INFO][5150] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.5/26] block=192.168.70.0/26 handle="k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.878 [INFO][5150] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.5/26] handle="k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.878 [INFO][5150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:16.922314 containerd[1738]: 2026-01-23 23:58:16.878 [INFO][5150] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.5/26] IPv6=[] ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" HandleID="k8s-pod-network.7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.922934 containerd[1738]: 2026-01-23 23:58:16.883 [INFO][5118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Namespace="calico-system" Pod="goldmane-7c778bb748-wtg4h" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"", Pod:"goldmane-7c778bb748-wtg4h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4dd51655d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:16.922934 containerd[1738]: 2026-01-23 23:58:16.883 [INFO][5118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.5/32] ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Namespace="calico-system" Pod="goldmane-7c778bb748-wtg4h" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.922934 containerd[1738]: 2026-01-23 23:58:16.883 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4dd51655d4 
ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Namespace="calico-system" Pod="goldmane-7c778bb748-wtg4h" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.922934 containerd[1738]: 2026-01-23 23:58:16.890 [INFO][5118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Namespace="calico-system" Pod="goldmane-7c778bb748-wtg4h" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.922934 containerd[1738]: 2026-01-23 23:58:16.891 [INFO][5118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Namespace="calico-system" Pod="goldmane-7c778bb748-wtg4h" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c", Pod:"goldmane-7c778bb748-wtg4h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4dd51655d4", MAC:"e2:fc:8e:5c:12:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:16.922934 containerd[1738]: 2026-01-23 23:58:16.917 [INFO][5118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c" Namespace="calico-system" Pod="goldmane-7c778bb748-wtg4h" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:16.967897 containerd[1738]: time="2026-01-23T23:58:16.966733800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:16.967897 containerd[1738]: time="2026-01-23T23:58:16.966825600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:16.967897 containerd[1738]: time="2026-01-23T23:58:16.966842840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:16.967897 containerd[1738]: time="2026-01-23T23:58:16.966958519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:17.007639 systemd[1]: Started cri-containerd-7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c.scope - libcontainer container 7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c. Jan 23 23:58:17.022005 systemd-networkd[1363]: calid081c29594e: Link UP Jan 23 23:58:17.024457 systemd-networkd[1363]: calid081c29594e: Gained carrier Jan 23 23:58:17.034674 systemd-networkd[1363]: cali72948936d35: Gained IPv6LL Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.772 [INFO][5117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0 coredns-66bc5c9577- kube-system 7b65147e-f60b-40c9-8c5d-17265b54435d 972 0 2026-01-23 23:57:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-31deed6810 coredns-66bc5c9577-nfkkc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid081c29594e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Namespace="kube-system" Pod="coredns-66bc5c9577-nfkkc" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.773 [INFO][5117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Namespace="kube-system" Pod="coredns-66bc5c9577-nfkkc" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.827 [INFO][5145] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" HandleID="k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.827 [INFO][5145] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" HandleID="k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1da0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-31deed6810", "pod":"coredns-66bc5c9577-nfkkc", "timestamp":"2026-01-23 23:58:16.827503088 +0000 UTC"}, Hostname:"ci-4081.3.6-n-31deed6810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.827 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.878 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.878 [INFO][5145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-31deed6810' Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.946 [INFO][5145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.955 [INFO][5145] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.962 [INFO][5145] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.964 [INFO][5145] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.974 [INFO][5145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.974 [INFO][5145] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.980 [INFO][5145] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04 Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:16.997 [INFO][5145] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:17.011 [INFO][5145] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.6/26] block=192.168.70.0/26 handle="k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:17.011 [INFO][5145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.6/26] handle="k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:17.011 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:58:17.058922 containerd[1738]: 2026-01-23 23:58:17.011 [INFO][5145] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.6/26] IPv6=[] ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" HandleID="k8s-pod-network.813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:17.059815 containerd[1738]: 2026-01-23 23:58:17.015 [INFO][5117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Namespace="kube-system" Pod="coredns-66bc5c9577-nfkkc" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7b65147e-f60b-40c9-8c5d-17265b54435d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"", Pod:"coredns-66bc5c9577-nfkkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid081c29594e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:17.059815 containerd[1738]: 2026-01-23 23:58:17.016 [INFO][5117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.6/32] ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Namespace="kube-system" Pod="coredns-66bc5c9577-nfkkc" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:17.059815 containerd[1738]: 2026-01-23 23:58:17.016 [INFO][5117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid081c29594e ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Namespace="kube-system" Pod="coredns-66bc5c9577-nfkkc" 
WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:17.059815 containerd[1738]: 2026-01-23 23:58:17.025 [INFO][5117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Namespace="kube-system" Pod="coredns-66bc5c9577-nfkkc" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:17.059815 containerd[1738]: 2026-01-23 23:58:17.026 [INFO][5117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Namespace="kube-system" Pod="coredns-66bc5c9577-nfkkc" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7b65147e-f60b-40c9-8c5d-17265b54435d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04", Pod:"coredns-66bc5c9577-nfkkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid081c29594e", MAC:"56:49:b6:a4:74:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:17.059989 containerd[1738]: 2026-01-23 23:58:17.055 [INFO][5117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04" Namespace="kube-system" Pod="coredns-66bc5c9577-nfkkc" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:17.093750 containerd[1738]: time="2026-01-23T23:58:17.093615504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:17.093750 containerd[1738]: time="2026-01-23T23:58:17.093692304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:17.093750 containerd[1738]: time="2026-01-23T23:58:17.093712704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:17.095215 containerd[1738]: time="2026-01-23T23:58:17.093803304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:17.131831 containerd[1738]: time="2026-01-23T23:58:17.131297405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wtg4h,Uid:4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f,Namespace:calico-system,Attempt:1,} returns sandbox id \"7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c\"" Jan 23 23:58:17.140443 containerd[1738]: time="2026-01-23T23:58:17.139807142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:58:17.141307 systemd[1]: Started cri-containerd-813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04.scope - libcontainer container 813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04. Jan 23 23:58:17.268177 containerd[1738]: time="2026-01-23T23:58:17.268137243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nfkkc,Uid:7b65147e-f60b-40c9-8c5d-17265b54435d,Namespace:kube-system,Attempt:1,} returns sandbox id \"813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04\"" Jan 23 23:58:17.285367 containerd[1738]: time="2026-01-23T23:58:17.285320078Z" level=info msg="CreateContainer within sandbox \"813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:58:17.313020 containerd[1738]: time="2026-01-23T23:58:17.312652085Z" level=info msg="CreateContainer within sandbox \"813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71063ca68463cae5ad2aa2a560b4fcc183b95287b5ecf7e09222e11ec53c60a3\"" Jan 23 23:58:17.315464 containerd[1738]: time="2026-01-23T23:58:17.313737563Z" level=info msg="StartContainer for \"71063ca68463cae5ad2aa2a560b4fcc183b95287b5ecf7e09222e11ec53c60a3\"" Jan 23 23:58:17.350609 systemd[1]: Started cri-containerd-71063ca68463cae5ad2aa2a560b4fcc183b95287b5ecf7e09222e11ec53c60a3.scope - libcontainer container 71063ca68463cae5ad2aa2a560b4fcc183b95287b5ecf7e09222e11ec53c60a3. 
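[editor's note] The entries above trace the standard CRI lifecycle that kubelet drives against containerd: RunPodSandbox returns a sandbox id, then CreateContainer and StartContainer run inside it, and systemd starts the matching cri-containerd scope. A minimal sketch of a CRI client that talks to the same runtime and lists the resulting sandboxes — the socket path and the use of the k8s.io/cri-api v1 Go bindings are assumptions, not taken from this log:

// Minimal CRI client sketch. Assumptions: containerd's CRI socket at the
// default path, and the k8s.io/cri-api v1 bindings; nothing here is shown
// in the log itself.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// kubelet reaches containerd over a local unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// List pod sandboxes, e.g. the coredns sandbox created above.
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s  %s/%s  %s\n",
			sb.Id[:12], sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}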
Jan 23 23:58:17.387060 containerd[1738]: time="2026-01-23T23:58:17.387012409Z" level=info msg="StartContainer for \"71063ca68463cae5ad2aa2a560b4fcc183b95287b5ecf7e09222e11ec53c60a3\" returns successfully" Jan 23 23:58:17.433759 containerd[1738]: time="2026-01-23T23:58:17.433118327Z" level=info msg="StopPodSandbox for \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\"" Jan 23 23:58:17.434308 containerd[1738]: time="2026-01-23T23:58:17.434271404Z" level=info msg="StopPodSandbox for \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\"" Jan 23 23:58:17.436059 containerd[1738]: time="2026-01-23T23:58:17.436018639Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:17.441118 containerd[1738]: time="2026-01-23T23:58:17.439481310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:58:17.443103 containerd[1738]: time="2026-01-23T23:58:17.439903189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:17.443392 kubelet[3215]: E0123 23:58:17.443256 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:17.443808 kubelet[3215]: E0123 23:58:17.443769 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:17.444377 kubelet[3215]: E0123 23:58:17.443878 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wtg4h_calico-system(4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:17.444377 kubelet[3215]: E0123 23:58:17.443910 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.536 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.536 [INFO][5314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" iface="eth0" netns="/var/run/netns/cni-c91239ac-9bbf-fa31-c300-e098de3db579" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.537 [INFO][5314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" iface="eth0" netns="/var/run/netns/cni-c91239ac-9bbf-fa31-c300-e098de3db579" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.537 [INFO][5314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" iface="eth0" netns="/var/run/netns/cni-c91239ac-9bbf-fa31-c300-e098de3db579" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.537 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.537 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.583 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.583 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.584 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.597 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.597 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.598 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:17.605675 containerd[1738]: 2026-01-23 23:58:17.603 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:17.607129 containerd[1738]: time="2026-01-23T23:58:17.606575148Z" level=info msg="TearDown network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\" successfully" Jan 23 23:58:17.607129 containerd[1738]: time="2026-01-23T23:58:17.606605668Z" level=info msg="StopPodSandbox for \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\" returns successfully" Jan 23 23:58:17.614248 containerd[1738]: time="2026-01-23T23:58:17.613103691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4879dc-2rp8t,Uid:2db427fa-9e25-4d91-9748-361f655acfc7,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.543 [INFO][5315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.543 [INFO][5315] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" iface="eth0" netns="/var/run/netns/cni-85460903-22c4-a624-0251-ed3d0ac3bf3b" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.544 [INFO][5315] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" iface="eth0" netns="/var/run/netns/cni-85460903-22c4-a624-0251-ed3d0ac3bf3b" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.545 [INFO][5315] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" iface="eth0" netns="/var/run/netns/cni-85460903-22c4-a624-0251-ed3d0ac3bf3b" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.545 [INFO][5315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.545 [INFO][5315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.588 [INFO][5333] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.588 [INFO][5333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.598 [INFO][5333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.611 [WARNING][5333] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.611 [INFO][5333] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.613 [INFO][5333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:17.619738 containerd[1738]: 2026-01-23 23:58:17.617 [INFO][5315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:17.620108 containerd[1738]: time="2026-01-23T23:58:17.620077513Z" level=info msg="TearDown network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\" successfully" Jan 23 23:58:17.620108 containerd[1738]: time="2026-01-23T23:58:17.620103153Z" level=info msg="StopPodSandbox for \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\" returns successfully" Jan 23 23:58:17.624786 containerd[1738]: time="2026-01-23T23:58:17.624733420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dbb8cd949-trksp,Uid:1a7bcaec-bb3f-491a-bd0f-d443085a7496,Namespace:calico-system,Attempt:1,}" Jan 23 23:58:17.657089 systemd[1]: run-netns-cni\x2d85460903\x2d22c4\x2da624\x2d0251\x2ded3d0ac3bf3b.mount: Deactivated successfully. Jan 23 23:58:17.657176 systemd[1]: run-netns-cni\x2dc91239ac\x2d9bbf\x2dfa31\x2dc300\x2de098de3db579.mount: Deactivated successfully. 
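[editor's note] The goldmane pull failure above (NotFound for ghcr.io/flatcar/calico/goldmane:v3.30.4, "trying next host", then ErrImagePull surfacing through kubelet) can be reproduced outside kubelet with the containerd Go client. A sketch, assuming the default socket path and the "k8s.io" namespace that kubelet-managed images live in — the log only shows kubelet's side of this:

// Reproduce the failing pull directly against containerd (a sketch; the
// socket path and the k8s.io namespace are assumptions).
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Expect the same NotFound the log shows: the v3.30.4 tag cannot be
	// resolved under ghcr.io/flatcar/calico.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull failed as in the log: %v", err)
	}
	log.Printf("pulled %s", img.Name())
}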
Jan 23 23:58:17.701284 kubelet[3215]: E0123 23:58:17.697985 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f" Jan 23 23:58:17.958971 systemd-networkd[1363]: cali55f7a96ae23: Link UP Jan 23 23:58:17.962541 systemd-networkd[1363]: cali55f7a96ae23: Gained carrier Jan 23 23:58:17.995432 kubelet[3215]: I0123 23:58:17.994765 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nfkkc" podStartSLOduration=45.994746498 podStartE2EDuration="45.994746498s" podCreationTimestamp="2026-01-23 23:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:58:17.785889995 +0000 UTC m=+51.489536754" watchObservedRunningTime="2026-01-23 23:58:17.994746498 +0000 UTC m=+51.698393257" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.768 [INFO][5352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0 calico-kube-controllers-6dbb8cd949- calico-system 1a7bcaec-bb3f-491a-bd0f-d443085a7496 1000 0 2026-01-23 23:57:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dbb8cd949 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-31deed6810 calico-kube-controllers-6dbb8cd949-trksp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali55f7a96ae23 [] [] }} ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Namespace="calico-system" Pod="calico-kube-controllers-6dbb8cd949-trksp" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.769 [INFO][5352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Namespace="calico-system" Pod="calico-kube-controllers-6dbb8cd949-trksp" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.850 [INFO][5368] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" HandleID="k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.851 [INFO][5368] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" HandleID="k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" 
Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-31deed6810", "pod":"calico-kube-controllers-6dbb8cd949-trksp", "timestamp":"2026-01-23 23:58:17.850821637 +0000 UTC"}, Hostname:"ci-4081.3.6-n-31deed6810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.851 [INFO][5368] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.851 [INFO][5368] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.851 [INFO][5368] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-31deed6810' Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.864 [INFO][5368] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.872 [INFO][5368] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.891 [INFO][5368] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.901 [INFO][5368] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.908 [INFO][5368] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.908 [INFO][5368] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.909 [INFO][5368] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1 Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.916 [INFO][5368] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.951 [INFO][5368] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.7/26] block=192.168.70.0/26 handle="k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.951 [INFO][5368] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.7/26] handle="k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.951 [INFO][5368] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:58:17.996554 containerd[1738]: 2026-01-23 23:58:17.951 [INFO][5368] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.7/26] IPv6=[] ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" HandleID="k8s-pod-network.2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.997024 containerd[1738]: 2026-01-23 23:58:17.953 [INFO][5352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Namespace="calico-system" Pod="calico-kube-controllers-6dbb8cd949-trksp" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0", GenerateName:"calico-kube-controllers-6dbb8cd949-", Namespace:"calico-system", SelfLink:"", UID:"1a7bcaec-bb3f-491a-bd0f-d443085a7496", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dbb8cd949", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"", Pod:"calico-kube-controllers-6dbb8cd949-trksp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55f7a96ae23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:17.997024 containerd[1738]: 2026-01-23 23:58:17.953 [INFO][5352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.7/32] ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Namespace="calico-system" Pod="calico-kube-controllers-6dbb8cd949-trksp" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.997024 containerd[1738]: 2026-01-23 23:58:17.953 [INFO][5352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55f7a96ae23 ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Namespace="calico-system" Pod="calico-kube-controllers-6dbb8cd949-trksp" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.997024 containerd[1738]: 2026-01-23 23:58:17.959 [INFO][5352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Namespace="calico-system" Pod="calico-kube-controllers-6dbb8cd949-trksp" 
WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:17.997024 containerd[1738]: 2026-01-23 23:58:17.959 [INFO][5352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Namespace="calico-system" Pod="calico-kube-controllers-6dbb8cd949-trksp" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0", GenerateName:"calico-kube-controllers-6dbb8cd949-", Namespace:"calico-system", SelfLink:"", UID:"1a7bcaec-bb3f-491a-bd0f-d443085a7496", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dbb8cd949", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1", Pod:"calico-kube-controllers-6dbb8cd949-trksp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55f7a96ae23", MAC:"9a:26:62:40:44:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:17.997024 containerd[1738]: 2026-01-23 23:58:17.993 [INFO][5352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1" Namespace="calico-system" Pod="calico-kube-controllers-6dbb8cd949-trksp" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:18.028214 containerd[1738]: time="2026-01-23T23:58:18.028094300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:18.028214 containerd[1738]: time="2026-01-23T23:58:18.028161940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:18.028214 containerd[1738]: time="2026-01-23T23:58:18.028177460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:18.028961 containerd[1738]: time="2026-01-23T23:58:18.028722620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:18.066230 systemd[1]: Started cri-containerd-2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1.scope - libcontainer container 2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1. Jan 23 23:58:18.072254 systemd-networkd[1363]: cali8ace103f748: Link UP Jan 23 23:58:18.076220 systemd-networkd[1363]: cali8ace103f748: Gained carrier Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:17.815 [INFO][5343] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0 calico-apiserver-7ddd4879dc- calico-apiserver 2db427fa-9e25-4d91-9748-361f655acfc7 999 0 2026-01-23 23:57:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ddd4879dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-31deed6810 calico-apiserver-7ddd4879dc-2rp8t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ace103f748 [] [] }} ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2rp8t" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:17.816 [INFO][5343] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2rp8t" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:17.869 [INFO][5373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" HandleID="k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:17.870 [INFO][5373] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" HandleID="k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-31deed6810", "pod":"calico-apiserver-7ddd4879dc-2rp8t", "timestamp":"2026-01-23 23:58:17.869837701 +0000 UTC"}, Hostname:"ci-4081.3.6-n-31deed6810", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:17.870 [INFO][5373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:17.951 [INFO][5373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:17.951 [INFO][5373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-31deed6810' Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:17.999 [INFO][5373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.007 [INFO][5373] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.015 [INFO][5373] ipam/ipam.go 511: Trying affinity for 192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.020 [INFO][5373] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.024 [INFO][5373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.0/26 host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.024 [INFO][5373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.0/26 handle="k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.030 [INFO][5373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9 Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.044 [INFO][5373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.0/26 handle="k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.060 [INFO][5373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.8/26] block=192.168.70.0/26 handle="k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.060 [INFO][5373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.8/26] handle="k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" host="ci-4081.3.6-n-31deed6810" Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.060 [INFO][5373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:58:18.099853 containerd[1738]: 2026-01-23 23:58:18.060 [INFO][5373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.8/26] IPv6=[] ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" HandleID="k8s-pod-network.18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:18.100402 containerd[1738]: 2026-01-23 23:58:18.064 [INFO][5343] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2rp8t" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0", GenerateName:"calico-apiserver-7ddd4879dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2db427fa-9e25-4d91-9748-361f655acfc7", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4879dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"", Pod:"calico-apiserver-7ddd4879dc-2rp8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ace103f748", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:18.100402 containerd[1738]: 2026-01-23 23:58:18.064 [INFO][5343] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.8/32] ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2rp8t" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:18.100402 containerd[1738]: 2026-01-23 23:58:18.064 [INFO][5343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ace103f748 ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2rp8t" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:18.100402 containerd[1738]: 2026-01-23 23:58:18.076 [INFO][5343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2rp8t" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:18.100402 containerd[1738]: 2026-01-23 23:58:18.079 [INFO][5343] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2rp8t" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0", GenerateName:"calico-apiserver-7ddd4879dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2db427fa-9e25-4d91-9748-361f655acfc7", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4879dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9", Pod:"calico-apiserver-7ddd4879dc-2rp8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ace103f748", MAC:"5e:dc:b2:18:9a:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:18.100402 containerd[1738]: 2026-01-23 23:58:18.093 [INFO][5343] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4879dc-2rp8t" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:18.135820 containerd[1738]: time="2026-01-23T23:58:18.134924754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:18.135820 containerd[1738]: time="2026-01-23T23:58:18.134980874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:18.135820 containerd[1738]: time="2026-01-23T23:58:18.134996234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:18.135820 containerd[1738]: time="2026-01-23T23:58:18.135085834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:18.154699 containerd[1738]: time="2026-01-23T23:58:18.154389578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dbb8cd949-trksp,Uid:1a7bcaec-bb3f-491a-bd0f-d443085a7496,Namespace:calico-system,Attempt:1,} returns sandbox id \"2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1\"" Jan 23 23:58:18.157024 containerd[1738]: time="2026-01-23T23:58:18.156996462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:58:18.174602 systemd[1]: Started cri-containerd-18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9.scope - libcontainer container 18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9. Jan 23 23:58:18.254839 containerd[1738]: time="2026-01-23T23:58:18.254802145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4879dc-2rp8t,Uid:2db427fa-9e25-4d91-9748-361f655acfc7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9\"" Jan 23 23:58:18.314536 systemd-networkd[1363]: calif4dd51655d4: Gained IPv6LL Jan 23 23:58:18.378586 systemd-networkd[1363]: calid081c29594e: Gained IPv6LL Jan 23 23:58:18.432324 containerd[1738]: time="2026-01-23T23:58:18.432060207Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:18.434739 containerd[1738]: time="2026-01-23T23:58:18.434615130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:58:18.434739 containerd[1738]: time="2026-01-23T23:58:18.434681851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:18.435584 kubelet[3215]: E0123 23:58:18.434991 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:58:18.435584 kubelet[3215]: E0123 23:58:18.435035 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:58:18.435584 kubelet[3215]: E0123 23:58:18.435171 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dbb8cd949-trksp_calico-system(1a7bcaec-bb3f-491a-bd0f-d443085a7496): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:18.435584 kubelet[3215]: E0123 
23:58:18.435204 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:58:18.436157 containerd[1738]: time="2026-01-23T23:58:18.435670612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:18.685936 containerd[1738]: time="2026-01-23T23:58:18.685684526Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:18.689983 containerd[1738]: time="2026-01-23T23:58:18.689200770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:18.689983 containerd[1738]: time="2026-01-23T23:58:18.689314331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:18.690127 kubelet[3215]: E0123 23:58:18.689590 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:18.690127 kubelet[3215]: E0123 23:58:18.689630 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:18.690127 kubelet[3215]: E0123 23:58:18.689694 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7ddd4879dc-2rp8t_calico-apiserver(2db427fa-9e25-4d91-9748-361f655acfc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:18.690127 kubelet[3215]: E0123 23:58:18.689725 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:58:18.710444 kubelet[3215]: E0123 23:58:18.709775 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:58:18.713168 kubelet[3215]: E0123 23:58:18.712675 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f" Jan 23 23:58:18.713168 kubelet[3215]: E0123 23:58:18.712790 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:58:19.594666 systemd-networkd[1363]: cali55f7a96ae23: Gained IPv6LL Jan 23 23:58:19.658542 systemd-networkd[1363]: cali8ace103f748: Gained IPv6LL Jan 23 23:58:19.718622 kubelet[3215]: E0123 23:58:19.718101 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:58:19.720126 kubelet[3215]: E0123 23:58:19.719206 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:58:24.433898 containerd[1738]: time="2026-01-23T23:58:24.433344520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:58:24.712734 containerd[1738]: time="2026-01-23T23:58:24.712623016Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:24.715622 containerd[1738]: 
time="2026-01-23T23:58:24.715495608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:58:24.715622 containerd[1738]: time="2026-01-23T23:58:24.715570488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:58:24.716364 kubelet[3215]: E0123 23:58:24.715856 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:24.716364 kubelet[3215]: E0123 23:58:24.715898 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:24.716364 kubelet[3215]: E0123 23:58:24.715981 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-58bcdfdb7b-m76fr_calico-system(c69a3a9c-9be4-419b-bd7b-2c7c74ce300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:24.717507 containerd[1738]: time="2026-01-23T23:58:24.717301603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:58:24.994438 containerd[1738]: time="2026-01-23T23:58:24.994311866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:24.997300 containerd[1738]: time="2026-01-23T23:58:24.997238538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:58:24.997443 containerd[1738]: time="2026-01-23T23:58:24.997353538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:24.997640 kubelet[3215]: E0123 23:58:24.997603 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:24.997705 kubelet[3215]: E0123 23:58:24.997649 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:24.998089 kubelet[3215]: E0123 23:58:24.997723 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-58bcdfdb7b-m76fr_calico-system(c69a3a9c-9be4-419b-bd7b-2c7c74ce300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:24.998089 kubelet[3215]: E0123 23:58:24.997763 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e" Jan 23 23:58:26.427037 containerd[1738]: time="2026-01-23T23:58:26.426968749Z" level=info msg="StopPodSandbox for \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\"" Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.476 [WARNING][5500] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0", GenerateName:"calico-apiserver-7ddd4879dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0adf700e-5270-411b-82bf-1b013a95c851", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4879dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b", Pod:"calico-apiserver-7ddd4879dc-2hqft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72948936d35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.476 [INFO][5500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.476 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" iface="eth0" netns="" Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.476 [INFO][5500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.476 [INFO][5500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.504 [INFO][5508] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.506 [INFO][5508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.506 [INFO][5508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.519 [WARNING][5508] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.520 [INFO][5508] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.523 [INFO][5508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:26.527998 containerd[1738]: 2026-01-23 23:58:26.526 [INFO][5500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:26.528457 containerd[1738]: time="2026-01-23T23:58:26.528036965Z" level=info msg="TearDown network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\" successfully" Jan 23 23:58:26.528457 containerd[1738]: time="2026-01-23T23:58:26.528061325Z" level=info msg="StopPodSandbox for \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\" returns successfully" Jan 23 23:58:26.528633 containerd[1738]: time="2026-01-23T23:58:26.528602285Z" level=info msg="RemovePodSandbox for \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\"" Jan 23 23:58:26.528668 containerd[1738]: time="2026-01-23T23:58:26.528635525Z" level=info msg="Forcibly stopping sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\"" Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.568 [WARNING][5522] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0", GenerateName:"calico-apiserver-7ddd4879dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0adf700e-5270-411b-82bf-1b013a95c851", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4879dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"982254321fc84529dd4965222887e89c030f011a64bd2e38e81eaa77207c1b5b", Pod:"calico-apiserver-7ddd4879dc-2hqft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72948936d35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.568 [INFO][5522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.568 [INFO][5522] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" iface="eth0" netns="" Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.568 [INFO][5522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.568 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.591 [INFO][5529] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.591 [INFO][5529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.591 [INFO][5529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.600 [WARNING][5529] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.600 [INFO][5529] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" HandleID="k8s-pod-network.11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2hqft-eth0" Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.605 [INFO][5529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:26.608587 containerd[1738]: 2026-01-23 23:58:26.606 [INFO][5522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4" Jan 23 23:58:26.609152 containerd[1738]: time="2026-01-23T23:58:26.608747707Z" level=info msg="TearDown network for sandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\" successfully" Jan 23 23:58:26.619897 containerd[1738]: time="2026-01-23T23:58:26.619672464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:26.619897 containerd[1738]: time="2026-01-23T23:58:26.619739104Z" level=info msg="RemovePodSandbox \"11ccaa761e0786db3f6469db2073f0b8ab74bb9b3a46a14ed97191ac4ffd34a4\" returns successfully" Jan 23 23:58:26.620528 containerd[1738]: time="2026-01-23T23:58:26.620249144Z" level=info msg="StopPodSandbox for \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\"" Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.655 [WARNING][5543] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e81a44d-05db-4251-91b5-ae7d0d2169e6", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94", Pod:"csi-node-driver-kzqw2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali580df5705fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.655 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.655 [INFO][5543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" iface="eth0" netns="" Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.655 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.655 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.672 [INFO][5550] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.673 [INFO][5550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.673 [INFO][5550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.681 [WARNING][5550] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.681 [INFO][5550] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.683 [INFO][5550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:26.686500 containerd[1738]: 2026-01-23 23:58:26.684 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:26.687693 containerd[1738]: time="2026-01-23T23:58:26.687524968Z" level=info msg="TearDown network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\" successfully" Jan 23 23:58:26.687693 containerd[1738]: time="2026-01-23T23:58:26.687556728Z" level=info msg="StopPodSandbox for \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\" returns successfully" Jan 23 23:58:26.689031 containerd[1738]: time="2026-01-23T23:58:26.688188768Z" level=info msg="RemovePodSandbox for \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\"" Jan 23 23:58:26.689031 containerd[1738]: time="2026-01-23T23:58:26.688222248Z" level=info msg="Forcibly stopping sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\"" Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.722 [WARNING][5564] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e81a44d-05db-4251-91b5-ae7d0d2169e6", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"10d78be8c5eb8e8e21b0290a3bd6745755d2fbc32327650a24abd067394cde94", Pod:"csi-node-driver-kzqw2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali580df5705fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.722 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.722 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" iface="eth0" netns="" Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.722 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.722 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.744 [INFO][5571] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.745 [INFO][5571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.745 [INFO][5571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.753 [WARNING][5571] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.753 [INFO][5571] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" HandleID="k8s-pod-network.cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Workload="ci--4081.3.6--n--31deed6810-k8s-csi--node--driver--kzqw2-eth0" Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.756 [INFO][5571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:26.761862 containerd[1738]: 2026-01-23 23:58:26.759 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad" Jan 23 23:58:26.762793 containerd[1738]: time="2026-01-23T23:58:26.762327911Z" level=info msg="TearDown network for sandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\" successfully" Jan 23 23:58:26.781578 containerd[1738]: time="2026-01-23T23:58:26.781429427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:26.781578 containerd[1738]: time="2026-01-23T23:58:26.781496907Z" level=info msg="RemovePodSandbox \"cf805c790ddcae613551ac8216176334ffd941b3e6cb26446229541054e70bad\" returns successfully" Jan 23 23:58:26.782323 containerd[1738]: time="2026-01-23T23:58:26.782078987Z" level=info msg="StopPodSandbox for \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\"" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.831 [WARNING][5587] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.832 [INFO][5587] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.832 [INFO][5587] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" iface="eth0" netns="" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.832 [INFO][5587] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.832 [INFO][5587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.858 [INFO][5594] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.858 [INFO][5594] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.858 [INFO][5594] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.868 [WARNING][5594] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.868 [INFO][5594] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.871 [INFO][5594] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:26.875821 containerd[1738]: 2026-01-23 23:58:26.874 [INFO][5587] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:26.876652 containerd[1738]: time="2026-01-23T23:58:26.876021885Z" level=info msg="TearDown network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\" successfully" Jan 23 23:58:26.876652 containerd[1738]: time="2026-01-23T23:58:26.876048885Z" level=info msg="StopPodSandbox for \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\" returns successfully" Jan 23 23:58:26.876904 containerd[1738]: time="2026-01-23T23:58:26.876879525Z" level=info msg="RemovePodSandbox for \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\"" Jan 23 23:58:26.876963 containerd[1738]: time="2026-01-23T23:58:26.876911365Z" level=info msg="Forcibly stopping sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\"" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.919 [WARNING][5608] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" WorkloadEndpoint="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.919 [INFO][5608] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.919 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" iface="eth0" netns="" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.919 [INFO][5608] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.920 [INFO][5608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.938 [INFO][5615] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.938 [INFO][5615] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.939 [INFO][5615] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.949 [WARNING][5615] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.951 [INFO][5615] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" HandleID="k8s-pod-network.5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Workload="ci--4081.3.6--n--31deed6810-k8s-whisker--64578dd766--k2fwc-eth0" Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.953 [INFO][5615] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:26.956489 containerd[1738]: 2026-01-23 23:58:26.954 [INFO][5608] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d" Jan 23 23:58:26.956849 containerd[1738]: time="2026-01-23T23:58:26.956529986Z" level=info msg="TearDown network for sandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\" successfully" Jan 23 23:58:26.964896 containerd[1738]: time="2026-01-23T23:58:26.964837264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:26.965085 containerd[1738]: time="2026-01-23T23:58:26.964904344Z" level=info msg="RemovePodSandbox \"5a88eb3f980b41f92c75ebbf71ba9c3ab945d889d0542904b6200f51e7057f8d\" returns successfully" Jan 23 23:58:26.965780 containerd[1738]: time="2026-01-23T23:58:26.965550184Z" level=info msg="StopPodSandbox for \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\"" Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.004 [WARNING][5629] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0", GenerateName:"calico-kube-controllers-6dbb8cd949-", Namespace:"calico-system", SelfLink:"", UID:"1a7bcaec-bb3f-491a-bd0f-d443085a7496", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dbb8cd949", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1", Pod:"calico-kube-controllers-6dbb8cd949-trksp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55f7a96ae23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.005 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.005 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" iface="eth0" netns="" Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.005 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.005 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.031 [INFO][5636] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.031 [INFO][5636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.031 [INFO][5636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.043 [WARNING][5636] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.043 [INFO][5636] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.044 [INFO][5636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.048793 containerd[1738]: 2026-01-23 23:58:27.046 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:27.049748 containerd[1738]: time="2026-01-23T23:58:27.049049805Z" level=info msg="TearDown network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\" successfully" Jan 23 23:58:27.049748 containerd[1738]: time="2026-01-23T23:58:27.049094845Z" level=info msg="StopPodSandbox for \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\" returns successfully" Jan 23 23:58:27.050227 containerd[1738]: time="2026-01-23T23:58:27.049988765Z" level=info msg="RemovePodSandbox for \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\"" Jan 23 23:58:27.050227 containerd[1738]: time="2026-01-23T23:58:27.050017245Z" level=info msg="Forcibly stopping sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\"" Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.095 [WARNING][5650] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0", GenerateName:"calico-kube-controllers-6dbb8cd949-", Namespace:"calico-system", SelfLink:"", UID:"1a7bcaec-bb3f-491a-bd0f-d443085a7496", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dbb8cd949", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"2b572dcbbb08650542b2cd686fde9f0db8960f4893b6644ad76e24d3206001a1", Pod:"calico-kube-controllers-6dbb8cd949-trksp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55f7a96ae23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.095 [INFO][5650] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.095 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" iface="eth0" netns="" Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.095 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.095 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.138 [INFO][5657] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.140 [INFO][5657] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.140 [INFO][5657] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.156 [WARNING][5657] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.156 [INFO][5657] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" HandleID="k8s-pod-network.589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--kube--controllers--6dbb8cd949--trksp-eth0" Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.157 [INFO][5657] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.163355 containerd[1738]: 2026-01-23 23:58:27.161 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de" Jan 23 23:58:27.166017 containerd[1738]: time="2026-01-23T23:58:27.165580258Z" level=info msg="TearDown network for sandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\" successfully" Jan 23 23:58:27.172146 containerd[1738]: time="2026-01-23T23:58:27.172100577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:27.172422 containerd[1738]: time="2026-01-23T23:58:27.172321376Z" level=info msg="RemovePodSandbox \"589e94b4de834a9d4aa84381c5eb4ef355038504762a9e6666e3ab565aacf3de\" returns successfully" Jan 23 23:58:27.173044 containerd[1738]: time="2026-01-23T23:58:27.172799536Z" level=info msg="StopPodSandbox for \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\"" Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.230 [WARNING][5671] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0", GenerateName:"calico-apiserver-7ddd4879dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2db427fa-9e25-4d91-9748-361f655acfc7", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4879dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9", Pod:"calico-apiserver-7ddd4879dc-2rp8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ace103f748", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.230 [INFO][5671] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.230 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" iface="eth0" netns="" Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.230 [INFO][5671] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.230 [INFO][5671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.269 [INFO][5678] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.269 [INFO][5678] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.269 [INFO][5678] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.278 [WARNING][5678] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.278 [INFO][5678] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.279 [INFO][5678] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.284555 containerd[1738]: 2026-01-23 23:58:27.281 [INFO][5671] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:27.285726 containerd[1738]: time="2026-01-23T23:58:27.284995350Z" level=info msg="TearDown network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\" successfully" Jan 23 23:58:27.285726 containerd[1738]: time="2026-01-23T23:58:27.285023870Z" level=info msg="StopPodSandbox for \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\" returns successfully" Jan 23 23:58:27.286584 containerd[1738]: time="2026-01-23T23:58:27.286304270Z" level=info msg="RemovePodSandbox for \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\"" Jan 23 23:58:27.286584 containerd[1738]: time="2026-01-23T23:58:27.286339110Z" level=info msg="Forcibly stopping sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\"" Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.321 [WARNING][5693] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0", GenerateName:"calico-apiserver-7ddd4879dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2db427fa-9e25-4d91-9748-361f655acfc7", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4879dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"18ebf04b37b31e4e48549835cc66daee3beab72b17dde4d1f96adc2990b894b9", Pod:"calico-apiserver-7ddd4879dc-2rp8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ace103f748", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.321 [INFO][5693] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.321 [INFO][5693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" iface="eth0" netns="" Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.321 [INFO][5693] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.321 [INFO][5693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.347 [INFO][5700] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.348 [INFO][5700] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.348 [INFO][5700] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.358 [WARNING][5700] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.358 [INFO][5700] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" HandleID="k8s-pod-network.802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Workload="ci--4081.3.6--n--31deed6810-k8s-calico--apiserver--7ddd4879dc--2rp8t-eth0" Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.359 [INFO][5700] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.364308 containerd[1738]: 2026-01-23 23:58:27.361 [INFO][5693] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95" Jan 23 23:58:27.364308 containerd[1738]: time="2026-01-23T23:58:27.364234292Z" level=info msg="TearDown network for sandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\" successfully" Jan 23 23:58:27.375023 containerd[1738]: time="2026-01-23T23:58:27.374978730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:27.377541 containerd[1738]: time="2026-01-23T23:58:27.375228570Z" level=info msg="RemovePodSandbox \"802be9d8144fba2881160c766031eeee28b4a2b653e9b193f371053a8d830c95\" returns successfully" Jan 23 23:58:27.378238 containerd[1738]: time="2026-01-23T23:58:27.377955929Z" level=info msg="StopPodSandbox for \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\"" Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.415 [WARNING][5714] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7b65147e-f60b-40c9-8c5d-17265b54435d", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04", Pod:"coredns-66bc5c9577-nfkkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid081c29594e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.415 [INFO][5714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.415 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" iface="eth0" netns="" Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.415 [INFO][5714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.415 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.439 [INFO][5721] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.439 [INFO][5721] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.439 [INFO][5721] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.456 [WARNING][5721] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.456 [INFO][5721] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.457 [INFO][5721] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.462380 containerd[1738]: 2026-01-23 23:58:27.459 [INFO][5714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:27.463456 containerd[1738]: time="2026-01-23T23:58:27.463038869Z" level=info msg="TearDown network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\" successfully" Jan 23 23:58:27.463456 containerd[1738]: time="2026-01-23T23:58:27.463074349Z" level=info msg="StopPodSandbox for \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\" returns successfully" Jan 23 23:58:27.464304 containerd[1738]: time="2026-01-23T23:58:27.463577269Z" level=info msg="RemovePodSandbox for \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\"" Jan 23 23:58:27.464304 containerd[1738]: time="2026-01-23T23:58:27.463607389Z" level=info msg="Forcibly stopping sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\"" Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.505 [WARNING][5736] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7b65147e-f60b-40c9-8c5d-17265b54435d", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"813225df1b5862e63741794f571b8535f9a3159d1688d3579a9222463291ba04", Pod:"coredns-66bc5c9577-nfkkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid081c29594e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.505 [INFO][5736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.505 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" iface="eth0" netns="" Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.505 [INFO][5736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.505 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.528 [INFO][5743] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.529 [INFO][5743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.529 [INFO][5743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.539 [WARNING][5743] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.540 [INFO][5743] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" HandleID="k8s-pod-network.4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--nfkkc-eth0" Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.541 [INFO][5743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.545304 containerd[1738]: 2026-01-23 23:58:27.543 [INFO][5736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba" Jan 23 23:58:27.546505 containerd[1738]: time="2026-01-23T23:58:27.545285530Z" level=info msg="TearDown network for sandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\" successfully" Jan 23 23:58:27.553919 containerd[1738]: time="2026-01-23T23:58:27.553852168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:27.553919 containerd[1738]: time="2026-01-23T23:58:27.553921408Z" level=info msg="RemovePodSandbox \"4d96d3bdd1ec3219d0c086ef07956939c78c54917293e3839568fee1d0c590ba\" returns successfully" Jan 23 23:58:27.555993 containerd[1738]: time="2026-01-23T23:58:27.554670168Z" level=info msg="StopPodSandbox for \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\"" Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.597 [WARNING][5757] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"840af08f-d1f8-4fdc-a3d8-e0970397bca1", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce", Pod:"coredns-66bc5c9577-rzsxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali722a86487f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.598 [INFO][5757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.598 [INFO][5757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" iface="eth0" netns="" Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.598 [INFO][5757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.598 [INFO][5757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.628 [INFO][5764] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.628 [INFO][5764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.628 [INFO][5764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.639 [WARNING][5764] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.639 [INFO][5764] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.643 [INFO][5764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.650342 containerd[1738]: 2026-01-23 23:58:27.648 [INFO][5757] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:27.650981 containerd[1738]: time="2026-01-23T23:58:27.650620546Z" level=info msg="TearDown network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\" successfully" Jan 23 23:58:27.650981 containerd[1738]: time="2026-01-23T23:58:27.650647466Z" level=info msg="StopPodSandbox for \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\" returns successfully" Jan 23 23:58:27.651501 containerd[1738]: time="2026-01-23T23:58:27.651222506Z" level=info msg="RemovePodSandbox for \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\"" Jan 23 23:58:27.651501 containerd[1738]: time="2026-01-23T23:58:27.651252826Z" level=info msg="Forcibly stopping sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\"" Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.697 [WARNING][5778] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"840af08f-d1f8-4fdc-a3d8-e0970397bca1", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"7ac3a76769d2acbddc01f6e239a0b5dce9065ff9be1e4985c1332a52a2a8e4ce", Pod:"coredns-66bc5c9577-rzsxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali722a86487f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.697 [INFO][5778] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.697 [INFO][5778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" iface="eth0" netns="" Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.697 [INFO][5778] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.697 [INFO][5778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.721 [INFO][5785] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.721 [INFO][5785] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.721 [INFO][5785] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.733 [WARNING][5785] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.733 [INFO][5785] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" HandleID="k8s-pod-network.7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Workload="ci--4081.3.6--n--31deed6810-k8s-coredns--66bc5c9577--rzsxf-eth0" Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.736 [INFO][5785] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.742930 containerd[1738]: 2026-01-23 23:58:27.740 [INFO][5778] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd" Jan 23 23:58:27.743345 containerd[1738]: time="2026-01-23T23:58:27.742984438Z" level=info msg="TearDown network for sandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\" successfully" Jan 23 23:58:27.749902 containerd[1738]: time="2026-01-23T23:58:27.749460461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:27.749902 containerd[1738]: time="2026-01-23T23:58:27.749537541Z" level=info msg="RemovePodSandbox \"7e4cf528353b7bda1b81478384c1bf71e03a69049f853e5a9ffa2c539abcc3bd\" returns successfully" Jan 23 23:58:27.750085 containerd[1738]: time="2026-01-23T23:58:27.750033820Z" level=info msg="StopPodSandbox for \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\"" Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.835 [WARNING][5799] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c", Pod:"goldmane-7c778bb748-wtg4h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4dd51655d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.836 [INFO][5799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.836 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" iface="eth0" netns="" Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.836 [INFO][5799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.836 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.871 [INFO][5806] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.872 [INFO][5806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.873 [INFO][5806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.898 [WARNING][5806] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.898 [INFO][5806] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.901 [INFO][5806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:27.906772 containerd[1738]: 2026-01-23 23:58:27.904 [INFO][5799] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:27.906772 containerd[1738]: time="2026-01-23T23:58:27.906753775Z" level=info msg="TearDown network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\" successfully" Jan 23 23:58:27.907154 containerd[1738]: time="2026-01-23T23:58:27.906784615Z" level=info msg="StopPodSandbox for \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\" returns successfully" Jan 23 23:58:27.908432 containerd[1738]: time="2026-01-23T23:58:27.908384931Z" level=info msg="RemovePodSandbox for \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\"" Jan 23 23:58:27.908772 containerd[1738]: time="2026-01-23T23:58:27.908537851Z" level=info msg="Forcibly stopping sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\"" Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:27.953 [WARNING][5820] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-31deed6810", ContainerID:"7961aea318f5ac49838721861f3e280faee882153b32902d83e94ac8ff480b4c", Pod:"goldmane-7c778bb748-wtg4h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif4dd51655d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:27.953 [INFO][5820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:27.953 [INFO][5820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" iface="eth0" netns="" Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:27.953 [INFO][5820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:27.953 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:27.986 [INFO][5827] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:27.986 [INFO][5827] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:27.986 [INFO][5827] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:28.000 [WARNING][5827] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:28.000 [INFO][5827] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" HandleID="k8s-pod-network.0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Workload="ci--4081.3.6--n--31deed6810-k8s-goldmane--7c778bb748--wtg4h-eth0" Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:28.004 [INFO][5827] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:28.010127 containerd[1738]: 2026-01-23 23:58:28.007 [INFO][5820] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473" Jan 23 23:58:28.010575 containerd[1738]: time="2026-01-23T23:58:28.010178228Z" level=info msg="TearDown network for sandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\" successfully" Jan 23 23:58:28.017021 containerd[1738]: time="2026-01-23T23:58:28.016970691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:28.017146 containerd[1738]: time="2026-01-23T23:58:28.017035891Z" level=info msg="RemovePodSandbox \"0c2a56eed8fb0b8a2beee022f9e784a5cfe5de7d27635ce468d55c13f5e38473\" returns successfully" Jan 23 23:58:29.433896 containerd[1738]: time="2026-01-23T23:58:29.433616514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:58:29.672435 containerd[1738]: time="2026-01-23T23:58:29.672299738Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:29.675065 containerd[1738]: time="2026-01-23T23:58:29.675030811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:58:29.675166 containerd[1738]: time="2026-01-23T23:58:29.675131410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:58:29.675316 kubelet[3215]: E0123 23:58:29.675279 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:29.675608 kubelet[3215]: E0123 23:58:29.675325 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:29.675608 kubelet[3215]: E0123 23:58:29.675398 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:29.676539 containerd[1738]: time="2026-01-23T23:58:29.676285847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:58:29.947949 containerd[1738]: time="2026-01-23T23:58:29.947904786Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:29.950497 containerd[1738]: time="2026-01-23T23:58:29.950424820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:58:29.950497 containerd[1738]: time="2026-01-23T23:58:29.950469940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:58:29.950905 kubelet[3215]: E0123 23:58:29.950629 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:29.950905 kubelet[3215]: E0123 23:58:29.950675 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:29.950905 kubelet[3215]: E0123 23:58:29.950748 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:29.951043 kubelet[3215]: E0123 23:58:29.950789 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:58:30.434505 containerd[1738]: time="2026-01-23T23:58:30.434175411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:58:30.678636 containerd[1738]: time="2026-01-23T23:58:30.678458661Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:30.681189 containerd[1738]: time="2026-01-23T23:58:30.681096854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:58:30.681429 containerd[1738]: time="2026-01-23T23:58:30.681295333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:30.681660 kubelet[3215]: E0123 23:58:30.681613 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:58:30.682047 kubelet[3215]: E0123 23:58:30.681662 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:58:30.682047 kubelet[3215]: E0123 23:58:30.681735 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dbb8cd949-trksp_calico-system(1a7bcaec-bb3f-491a-bd0f-d443085a7496): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:30.682047 kubelet[3215]: E0123 23:58:30.681765 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:58:32.433845 containerd[1738]: time="2026-01-23T23:58:32.433672010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:32.689370 containerd[1738]: time="2026-01-23T23:58:32.689331750Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:32.692335 containerd[1738]: time="2026-01-23T23:58:32.692265102Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:32.692335 containerd[1738]: time="2026-01-23T23:58:32.692302222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:32.692775 kubelet[3215]: E0123 23:58:32.692507 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:32.692775 kubelet[3215]: E0123 23:58:32.692559 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:32.692775 kubelet[3215]: E0123 23:58:32.692638 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7ddd4879dc-2hqft_calico-apiserver(0adf700e-5270-411b-82bf-1b013a95c851): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:32.692775 kubelet[3215]: E0123 23:58:32.692697 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:58:33.431163 containerd[1738]: time="2026-01-23T23:58:33.431116315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:58:33.693657 containerd[1738]: time="2026-01-23T23:58:33.693600654Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:33.696426 containerd[1738]: time="2026-01-23T23:58:33.696367248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:58:33.696532 containerd[1738]: time="2026-01-23T23:58:33.696481008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:33.697153 kubelet[3215]: E0123 23:58:33.696679 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:33.697153 kubelet[3215]: E0123 23:58:33.696719 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:33.697153 kubelet[3215]: E0123 23:58:33.696780 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wtg4h_calico-system(4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:33.697153 kubelet[3215]: E0123 23:58:33.696808 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f" Jan 23 23:58:34.440738 containerd[1738]: time="2026-01-23T23:58:34.440609836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:34.691852 containerd[1738]: time="2026-01-23T23:58:34.691358813Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:34.693837 containerd[1738]: time="2026-01-23T23:58:34.693707007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:34.693837 containerd[1738]: time="2026-01-23T23:58:34.693810567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:34.694295 kubelet[3215]: E0123 23:58:34.694255 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:34.694351 kubelet[3215]: E0123 23:58:34.694303 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:34.694447 kubelet[3215]: E0123 23:58:34.694372 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-7ddd4879dc-2rp8t_calico-apiserver(2db427fa-9e25-4d91-9748-361f655acfc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:34.694447 kubelet[3215]: E0123 23:58:34.694406 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:58:36.435028 kubelet[3215]: E0123 23:58:36.434972 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e" Jan 23 23:58:41.432202 kubelet[3215]: E0123 23:58:41.431188 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:58:41.432740 kubelet[3215]: E0123 23:58:41.432477 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:58:43.431921 kubelet[3215]: E0123 23:58:43.431866 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:58:47.432071 kubelet[3215]: E0123 23:58:47.431970 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:58:48.433053 kubelet[3215]: E0123 23:58:48.432631 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f" Jan 23 23:58:50.433032 containerd[1738]: time="2026-01-23T23:58:50.432698236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:58:50.677914 containerd[1738]: time="2026-01-23T23:58:50.677734951Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:50.680324 containerd[1738]: time="2026-01-23T23:58:50.680227145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:58:50.680324 containerd[1738]: time="2026-01-23T23:58:50.680298985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:58:50.680613 kubelet[3215]: E0123 23:58:50.680541 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:50.680983 kubelet[3215]: E0123 23:58:50.680614 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:50.680983 kubelet[3215]: E0123 23:58:50.680733 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-58bcdfdb7b-m76fr_calico-system(c69a3a9c-9be4-419b-bd7b-2c7c74ce300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:50.682516 containerd[1738]: time="2026-01-23T23:58:50.682482660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:58:50.930583 containerd[1738]: time="2026-01-23T23:58:50.930487248Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:50.933420 containerd[1738]: time="2026-01-23T23:58:50.933373641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:58:50.933500 containerd[1738]: time="2026-01-23T23:58:50.933479841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:50.933692 kubelet[3215]: E0123 23:58:50.933657 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:50.933756 kubelet[3215]: E0123 23:58:50.933712 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:50.933833 kubelet[3215]: E0123 23:58:50.933801 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-58bcdfdb7b-m76fr_calico-system(c69a3a9c-9be4-419b-bd7b-2c7c74ce300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:50.933901 kubelet[3215]: E0123 23:58:50.933846 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e" Jan 23 23:58:52.433608 containerd[1738]: time="2026-01-23T23:58:52.432595901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:58:52.738767 containerd[1738]: time="2026-01-23T23:58:52.738569035Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:52.741195 containerd[1738]: time="2026-01-23T23:58:52.741080270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:58:52.741195 containerd[1738]: time="2026-01-23T23:58:52.741153869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:58:52.741344 kubelet[3215]: E0123 23:58:52.741300 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:52.741344 kubelet[3215]: E0123 23:58:52.741340 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:52.742860 kubelet[3215]: E0123 23:58:52.741518 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:52.742925 containerd[1738]: time="2026-01-23T23:58:52.742598386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:58:52.996705 containerd[1738]: time="2026-01-23T23:58:52.996565760Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:53.000595 containerd[1738]: time="2026-01-23T23:58:53.000501471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:58:53.000733 containerd[1738]: time="2026-01-23T23:58:53.000566991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:53.000941 kubelet[3215]: E0123 23:58:53.000905 3215 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:58:53.001002 kubelet[3215]: E0123 23:58:53.000960 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:58:53.002437 kubelet[3215]: E0123 23:58:53.001106 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dbb8cd949-trksp_calico-system(1a7bcaec-bb3f-491a-bd0f-d443085a7496): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:53.002437 kubelet[3215]: E0123 23:58:53.001157 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:58:53.002621 containerd[1738]: time="2026-01-23T23:58:53.001367989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:58:53.244034 containerd[1738]: time="2026-01-23T23:58:53.243843189Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:53.246441 containerd[1738]: time="2026-01-23T23:58:53.246338784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:58:53.246441 containerd[1738]: time="2026-01-23T23:58:53.246405463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:58:53.246582 kubelet[3215]: E0123 23:58:53.246548 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:53.246623 kubelet[3215]: E0123 23:58:53.246590 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:53.246686 kubelet[3215]: E0123 23:58:53.246653 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:53.246815 kubelet[3215]: E0123 23:58:53.246696 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:58:56.433088 containerd[1738]: time="2026-01-23T23:58:56.433014790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:56.693980 containerd[1738]: time="2026-01-23T23:58:56.693932348Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:56.696487 containerd[1738]: time="2026-01-23T23:58:56.696439582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:56.696581 containerd[1738]: time="2026-01-23T23:58:56.696547062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:56.697329 kubelet[3215]: E0123 23:58:56.696766 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:56.697329 kubelet[3215]: E0123 23:58:56.696816 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:56.697329 kubelet[3215]: E0123 23:58:56.696887 3215 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-7ddd4879dc-2hqft_calico-apiserver(0adf700e-5270-411b-82bf-1b013a95c851): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:56.697329 kubelet[3215]: E0123 23:58:56.696918 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:59:01.433049 containerd[1738]: time="2026-01-23T23:59:01.432734950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:59:01.668238 containerd[1738]: time="2026-01-23T23:59:01.668133004Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:01.671096 containerd[1738]: time="2026-01-23T23:59:01.671041839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:59:01.671172 containerd[1738]: time="2026-01-23T23:59:01.671153838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:59:01.671510 kubelet[3215]: E0123 23:59:01.671303 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:59:01.671510 kubelet[3215]: E0123 23:59:01.671348 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:59:01.671510 kubelet[3215]: E0123 23:59:01.671447 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wtg4h_calico-system(4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:01.671510 kubelet[3215]: E0123 23:59:01.671476 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f" Jan 23 23:59:02.437096 containerd[1738]: time="2026-01-23T23:59:02.436843843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:59:02.686693 containerd[1738]: time="2026-01-23T23:59:02.686521829Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:02.689316 containerd[1738]: time="2026-01-23T23:59:02.689182704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:59:02.689316 containerd[1738]: time="2026-01-23T23:59:02.689285463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:59:02.690520 kubelet[3215]: E0123 23:59:02.690479 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:02.690922 kubelet[3215]: E0123 23:59:02.690530 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:59:02.690922 kubelet[3215]: E0123 23:59:02.690606 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7ddd4879dc-2rp8t_calico-apiserver(2db427fa-9e25-4d91-9748-361f655acfc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:02.690922 kubelet[3215]: E0123 23:59:02.690636 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:59:03.433039 kubelet[3215]: E0123 23:59:03.432986 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e" Jan 23 23:59:05.435476 kubelet[3215]: E0123 23:59:05.435231 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:59:06.433130 kubelet[3215]: E0123 23:59:06.432645 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:59:08.432877 kubelet[3215]: E0123 23:59:08.432804 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:59:09.908238 systemd[1]: Started sshd@7-10.200.20.33:22-10.200.16.10:42496.service - OpenSSH per-connection server daemon (10.200.16.10:42496). Jan 23 23:59:10.328816 sshd[5887]: Accepted publickey for core from 10.200.16.10 port 42496 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:10.333773 sshd[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:10.339507 systemd-logind[1709]: New session 10 of user core. Jan 23 23:59:10.343608 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 23:59:10.725159 sshd[5887]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:10.729390 systemd[1]: sshd@7-10.200.20.33:22-10.200.16.10:42496.service: Deactivated successfully. Jan 23 23:59:10.732232 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:59:10.734829 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:59:10.737670 systemd-logind[1709]: Removed session 10. Jan 23 23:59:15.811022 systemd[1]: Started sshd@8-10.200.20.33:22-10.200.16.10:42504.service - OpenSSH per-connection server daemon (10.200.16.10:42504). Jan 23 23:59:16.229984 sshd[5924]: Accepted publickey for core from 10.200.16.10 port 42504 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:16.232889 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:16.243061 systemd-logind[1709]: New session 11 of user core. Jan 23 23:59:16.245637 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:59:16.433376 kubelet[3215]: E0123 23:59:16.433155 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f" Jan 23 23:59:16.434877 kubelet[3215]: E0123 23:59:16.434708 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:59:16.623548 sshd[5924]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:16.627803 systemd[1]: sshd@8-10.200.20.33:22-10.200.16.10:42504.service: Deactivated successfully. Jan 23 23:59:16.630325 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:59:16.634637 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:59:16.636546 systemd-logind[1709]: Removed session 11. 
Jan 23 23:59:17.432966 kubelet[3215]: E0123 23:59:17.432574 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:59:18.434458 kubelet[3215]: E0123 23:59:18.432362 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:59:18.435136 kubelet[3215]: E0123 23:59:18.435095 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e" Jan 23 23:59:19.432040 kubelet[3215]: E0123 23:59:19.431866 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:59:21.723136 systemd[1]: Started sshd@9-10.200.20.33:22-10.200.16.10:52098.service - OpenSSH per-connection server daemon (10.200.16.10:52098). Jan 23 23:59:22.176549 sshd[5939]: Accepted publickey for core from 10.200.16.10 port 52098 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:22.178429 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:22.185656 systemd-logind[1709]: New session 12 of user core. 
Jan 23 23:59:22.188562 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:59:22.650685 sshd[5939]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:22.654567 systemd[1]: sshd@9-10.200.20.33:22-10.200.16.10:52098.service: Deactivated successfully. Jan 23 23:59:22.657853 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:59:22.661773 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:59:22.663865 systemd-logind[1709]: Removed session 12. Jan 23 23:59:22.737749 systemd[1]: Started sshd@10-10.200.20.33:22-10.200.16.10:52110.service - OpenSSH per-connection server daemon (10.200.16.10:52110). Jan 23 23:59:23.190707 sshd[5955]: Accepted publickey for core from 10.200.16.10 port 52110 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:23.192681 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:23.199149 systemd-logind[1709]: New session 13 of user core. Jan 23 23:59:23.206615 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:59:23.665135 sshd[5955]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:23.671934 systemd[1]: sshd@10-10.200.20.33:22-10.200.16.10:52110.service: Deactivated successfully. Jan 23 23:59:23.678898 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:59:23.682171 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:59:23.685571 systemd-logind[1709]: Removed session 13. Jan 23 23:59:23.751705 systemd[1]: Started sshd@11-10.200.20.33:22-10.200.16.10:52124.service - OpenSSH per-connection server daemon (10.200.16.10:52124). Jan 23 23:59:24.204305 sshd[5965]: Accepted publickey for core from 10.200.16.10 port 52124 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:24.205718 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:24.210030 systemd-logind[1709]: New session 14 of user core. Jan 23 23:59:24.213585 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:59:24.613905 sshd[5965]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:24.618266 systemd[1]: sshd@11-10.200.20.33:22-10.200.16.10:52124.service: Deactivated successfully. Jan 23 23:59:24.622050 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:59:24.624768 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:59:24.626035 systemd-logind[1709]: Removed session 14. 
Jan 23 23:59:29.432744 kubelet[3215]: E0123 23:59:29.432670 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6" Jan 23 23:59:29.434009 kubelet[3215]: E0123 23:59:29.433955 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:59:29.708601 systemd[1]: Started sshd@12-10.200.20.33:22-10.200.16.10:54980.service - OpenSSH per-connection server daemon (10.200.16.10:54980). Jan 23 23:59:30.176279 sshd[5983]: Accepted publickey for core from 10.200.16.10 port 54980 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:30.178678 sshd[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:30.183665 systemd-logind[1709]: New session 15 of user core. Jan 23 23:59:30.187854 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:59:30.433577 kubelet[3215]: E0123 23:59:30.432922 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851" Jan 23 23:59:30.590319 sshd[5983]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:30.594125 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:59:30.597097 systemd[1]: sshd@12-10.200.20.33:22-10.200.16.10:54980.service: Deactivated successfully. Jan 23 23:59:30.599197 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:59:30.600077 systemd-logind[1709]: Removed session 15. 
Jan 23 23:59:31.432285 kubelet[3215]: E0123 23:59:31.432195 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f" Jan 23 23:59:31.432285 kubelet[3215]: E0123 23:59:31.432206 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496" Jan 23 23:59:32.432971 containerd[1738]: time="2026-01-23T23:59:32.432884712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:59:32.713571 containerd[1738]: time="2026-01-23T23:59:32.713519092Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:32.717793 containerd[1738]: time="2026-01-23T23:59:32.717732963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:59:32.717903 containerd[1738]: time="2026-01-23T23:59:32.717877203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:59:32.718160 kubelet[3215]: E0123 23:59:32.718119 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:59:32.718520 kubelet[3215]: E0123 23:59:32.718169 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:59:32.718520 kubelet[3215]: E0123 23:59:32.718248 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-58bcdfdb7b-m76fr_calico-system(c69a3a9c-9be4-419b-bd7b-2c7c74ce300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:32.720991 containerd[1738]: 
time="2026-01-23T23:59:32.720952516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:59:32.984082 containerd[1738]: time="2026-01-23T23:59:32.983350536Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:32.988975 containerd[1738]: time="2026-01-23T23:59:32.988249926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:59:32.988975 containerd[1738]: time="2026-01-23T23:59:32.988320605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:59:32.989128 kubelet[3215]: E0123 23:59:32.988473 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:59:32.989128 kubelet[3215]: E0123 23:59:32.988517 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:59:32.989128 kubelet[3215]: E0123 23:59:32.988585 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-58bcdfdb7b-m76fr_calico-system(c69a3a9c-9be4-419b-bd7b-2c7c74ce300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:32.989219 kubelet[3215]: E0123 23:59:32.988621 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e" Jan 23 23:59:35.684611 systemd[1]: Started sshd@13-10.200.20.33:22-10.200.16.10:54992.service - OpenSSH per-connection server daemon (10.200.16.10:54992). 
Jan 23 23:59:36.136177 sshd[6004]: Accepted publickey for core from 10.200.16.10 port 54992 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:36.138909 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:36.144877 systemd-logind[1709]: New session 16 of user core. Jan 23 23:59:36.152497 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:59:36.555517 sshd[6004]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:36.560491 systemd[1]: sshd@13-10.200.20.33:22-10.200.16.10:54992.service: Deactivated successfully. Jan 23 23:59:36.565777 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:59:36.567378 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:59:36.569887 systemd-logind[1709]: Removed session 16. Jan 23 23:59:40.432547 kubelet[3215]: E0123 23:59:40.431831 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7" Jan 23 23:59:41.636748 systemd[1]: Started sshd@14-10.200.20.33:22-10.200.16.10:46924.service - OpenSSH per-connection server daemon (10.200.16.10:46924). Jan 23 23:59:42.091993 sshd[6039]: Accepted publickey for core from 10.200.16.10 port 46924 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:42.095485 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:42.101584 systemd-logind[1709]: New session 17 of user core. Jan 23 23:59:42.108628 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:59:42.538309 sshd[6039]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:42.542369 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:59:42.543048 systemd[1]: sshd@14-10.200.20.33:22-10.200.16.10:46924.service: Deactivated successfully. Jan 23 23:59:42.547679 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:59:42.550429 systemd-logind[1709]: Removed session 17. Jan 23 23:59:42.624455 systemd[1]: Started sshd@15-10.200.20.33:22-10.200.16.10:46926.service - OpenSSH per-connection server daemon (10.200.16.10:46926). Jan 23 23:59:43.071622 sshd[6051]: Accepted publickey for core from 10.200.16.10 port 46926 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:59:43.074133 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:59:43.079458 systemd-logind[1709]: New session 18 of user core. Jan 23 23:59:43.086611 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:59:43.431824 containerd[1738]: time="2026-01-23T23:59:43.431655350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:59:43.602014 sshd[6051]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:43.606964 systemd[1]: sshd@15-10.200.20.33:22-10.200.16.10:46926.service: Deactivated successfully. Jan 23 23:59:43.609836 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 23 23:59:43.611762 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit.
Jan 23 23:59:43.612929 systemd-logind[1709]: Removed session 18.
Jan 23 23:59:43.685739 systemd[1]: Started sshd@16-10.200.20.33:22-10.200.16.10:46936.service - OpenSSH per-connection server daemon (10.200.16.10:46936).
Jan 23 23:59:43.715391 containerd[1738]: time="2026-01-23T23:59:43.715302758Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:43.721022 containerd[1738]: time="2026-01-23T23:59:43.720972025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 23:59:43.721091 containerd[1738]: time="2026-01-23T23:59:43.721080025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 23:59:43.721450 kubelet[3215]: E0123 23:59:43.721240 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 23:59:43.721450 kubelet[3215]: E0123 23:59:43.721289 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 23:59:43.721804 kubelet[3215]: E0123 23:59:43.721503 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dbb8cd949-trksp_calico-system(1a7bcaec-bb3f-491a-bd0f-d443085a7496): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:43.721804 kubelet[3215]: E0123 23:59:43.721544 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496"
Jan 23 23:59:43.722256 containerd[1738]: time="2026-01-23T23:59:43.722227662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 23:59:43.963468 containerd[1738]: time="2026-01-23T23:59:43.963276725Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:43.965944 containerd[1738]: time="2026-01-23T23:59:43.965840919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 23:59:43.965944 containerd[1738]: time="2026-01-23T23:59:43.965913199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:59:43.966093 kubelet[3215]: E0123 23:59:43.966046 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 23:59:43.966142 kubelet[3215]: E0123 23:59:43.966089 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 23:59:43.966445 kubelet[3215]: E0123 23:59:43.966162 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wtg4h_calico-system(4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:43.966445 kubelet[3215]: E0123 23:59:43.966197 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f"
Jan 23 23:59:44.142492 sshd[6070]: Accepted publickey for core from 10.200.16.10 port 46936 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 23 23:59:44.143927 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:59:44.149708 systemd-logind[1709]: New session 19 of user core.
Jan 23 23:59:44.153590 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 23:59:44.435613 containerd[1738]: time="2026-01-23T23:59:44.435494353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 23:59:44.700007 containerd[1738]: time="2026-01-23T23:59:44.699960923Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:44.703394 containerd[1738]: time="2026-01-23T23:59:44.702916757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 23:59:44.703394 containerd[1738]: time="2026-01-23T23:59:44.703036037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 23:59:44.703556 kubelet[3215]: E0123 23:59:44.703210 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 23:59:44.703556 kubelet[3215]: E0123 23:59:44.703252 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 23:59:44.703556 kubelet[3215]: E0123 23:59:44.703318 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:44.705382 containerd[1738]: time="2026-01-23T23:59:44.705168632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 23:59:44.989332 containerd[1738]: time="2026-01-23T23:59:44.988991679Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:44.997510 containerd[1738]: time="2026-01-23T23:59:44.994187268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 23:59:44.997510 containerd[1738]: time="2026-01-23T23:59:44.994324907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 23:59:44.998010 kubelet[3215]: E0123 23:59:44.997960 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 23:59:44.998881 kubelet[3215]: E0123 23:59:44.998020 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 23:59:44.999289 kubelet[3215]: E0123 23:59:44.999189 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kzqw2_calico-system(9e81a44d-05db-4251-91b5-ae7d0d2169e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:44.999289 kubelet[3215]: E0123 23:59:44.999248 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
Jan 23 23:59:45.260738 sshd[6070]: pam_unix(sshd:session): session closed for user core
Jan 23 23:59:45.267175 systemd[1]: sshd@16-10.200.20.33:22-10.200.16.10:46936.service: Deactivated successfully.
Jan 23 23:59:45.267208 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit.
Jan 23 23:59:45.269823 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 23:59:45.270929 systemd-logind[1709]: Removed session 19.
Jan 23 23:59:45.341405 systemd[1]: Started sshd@17-10.200.20.33:22-10.200.16.10:46942.service - OpenSSH per-connection server daemon (10.200.16.10:46942).
Jan 23 23:59:45.432147 containerd[1738]: time="2026-01-23T23:59:45.431895492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:59:45.705324 containerd[1738]: time="2026-01-23T23:59:45.705209523Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:45.708924 containerd[1738]: time="2026-01-23T23:59:45.708610636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 23:59:45.708924 containerd[1738]: time="2026-01-23T23:59:45.708732236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:59:45.709280 kubelet[3215]: E0123 23:59:45.708904 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:59:45.709280 kubelet[3215]: E0123 23:59:45.709055 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:59:45.709575 kubelet[3215]: E0123 23:59:45.709220 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7ddd4879dc-2hqft_calico-apiserver(0adf700e-5270-411b-82bf-1b013a95c851): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:45.709646 kubelet[3215]: E0123 23:59:45.709358 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851"
Jan 23 23:59:45.809432 sshd[6094]: Accepted publickey for core from 10.200.16.10 port 46942 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 23 23:59:45.811063 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:59:45.817911 systemd-logind[1709]: New session 20 of user core.
Jan 23 23:59:45.819643 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 23:59:46.359981 sshd[6094]: pam_unix(sshd:session): session closed for user core
Jan 23 23:59:46.368690 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit.
Jan 23 23:59:46.369030 systemd[1]: sshd@17-10.200.20.33:22-10.200.16.10:46942.service: Deactivated successfully.
Jan 23 23:59:46.377000 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 23:59:46.383232 systemd-logind[1709]: Removed session 20.
Jan 23 23:59:46.449683 systemd[1]: Started sshd@18-10.200.20.33:22-10.200.16.10:46948.service - OpenSSH per-connection server daemon (10.200.16.10:46948).
Jan 23 23:59:46.906934 sshd[6108]: Accepted publickey for core from 10.200.16.10 port 46948 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 23 23:59:46.908379 sshd[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:59:46.915487 systemd-logind[1709]: New session 21 of user core.
Jan 23 23:59:46.920577 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 23:59:47.316361 sshd[6108]: pam_unix(sshd:session): session closed for user core
Jan 23 23:59:47.321225 systemd[1]: sshd@18-10.200.20.33:22-10.200.16.10:46948.service: Deactivated successfully.
Jan 23 23:59:47.326544 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 23:59:47.331624 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit.
Jan 23 23:59:47.332835 systemd-logind[1709]: Removed session 21.
Jan 23 23:59:47.435530 kubelet[3215]: E0123 23:59:47.434485 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e"
Jan 23 23:59:52.394469 systemd[1]: Started sshd@19-10.200.20.33:22-10.200.16.10:36002.service - OpenSSH per-connection server daemon (10.200.16.10:36002).
Jan 23 23:59:52.857273 sshd[6137]: Accepted publickey for core from 10.200.16.10 port 36002 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 23 23:59:52.875215 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:59:52.879830 systemd-logind[1709]: New session 22 of user core.
Jan 23 23:59:52.885580 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 23:59:53.249636 sshd[6137]: pam_unix(sshd:session): session closed for user core
Jan 23 23:59:53.253624 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit.
Jan 23 23:59:53.254946 systemd[1]: sshd@19-10.200.20.33:22-10.200.16.10:36002.service: Deactivated successfully.
Jan 23 23:59:53.257455 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 23:59:53.258353 systemd-logind[1709]: Removed session 22.
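From here on the pod errors flip from ErrImagePull to ImagePullBackOff: the kubelet stops pulling on every sync and instead waits out a per-image back-off that grows roughly exponentially per attempt (commonly 10s doubling up to a 5m cap; the exact values depend on kubelet configuration, so treat them as an assumption). The cadence is easy to watch live:

  journalctl -u kubelet -f | grep -F 'Back-off pulling image'   # if the kubelet runs as a systemd unit
  kubectl -n calico-system get pod whisker-58bcdfdb7b-m76fr -w  # from a machine with cluster credentials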
Jan 23 23:59:55.432173 containerd[1738]: time="2026-01-23T23:59:55.431971207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:59:55.803307 containerd[1738]: time="2026-01-23T23:59:55.803251382Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:59:55.805877 containerd[1738]: time="2026-01-23T23:59:55.805809017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 23:59:55.805992 containerd[1738]: time="2026-01-23T23:59:55.805932217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:59:55.806819 kubelet[3215]: E0123 23:59:55.806598 3215 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:59:55.806819 kubelet[3215]: E0123 23:59:55.806652 3215 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:59:55.806819 kubelet[3215]: E0123 23:59:55.806735 3215 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7ddd4879dc-2rp8t_calico-apiserver(2db427fa-9e25-4d91-9748-361f655acfc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:59:55.806819 kubelet[3215]: E0123 23:59:55.806771 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7"
Jan 23 23:59:56.432614 kubelet[3215]: E0123 23:59:56.432174 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496"
Jan 23 23:59:58.331683 systemd[1]: Started sshd@20-10.200.20.33:22-10.200.16.10:36012.service - OpenSSH per-connection server daemon (10.200.16.10:36012).
Jan 23 23:59:58.433145 kubelet[3215]: E0123 23:59:58.433098 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851"
Jan 23 23:59:58.751549 sshd[6150]: Accepted publickey for core from 10.200.16.10 port 36012 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 23 23:59:58.752105 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:59:58.756117 systemd-logind[1709]: New session 23 of user core.
Jan 23 23:59:58.762553 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 23:59:59.144655 sshd[6150]: pam_unix(sshd:session): session closed for user core
Jan 23 23:59:59.149520 systemd[1]: sshd@20-10.200.20.33:22-10.200.16.10:36012.service: Deactivated successfully.
Jan 23 23:59:59.155145 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 23:59:59.155928 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit.
Jan 23 23:59:59.157141 systemd-logind[1709]: Removed session 23.
Jan 23 23:59:59.432746 kubelet[3215]: E0123 23:59:59.432154 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f"
Jan 23 23:59:59.434505 kubelet[3215]: E0123 23:59:59.433480 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
Jan 23 23:59:59.434505 kubelet[3215]: E0123 23:59:59.433660 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e"
Jan 24 00:00:04.247701 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Jan 24 00:00:04.251487 systemd[1]: Started sshd@21-10.200.20.33:22-10.200.16.10:33620.service - OpenSSH per-connection server daemon (10.200.16.10:33620).
Jan 24 00:00:04.264802 systemd[1]: logrotate.service: Deactivated successfully.
Jan 24 00:00:04.744433 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 33620 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:04.745508 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:04.753652 systemd-logind[1709]: New session 24 of user core.
Jan 24 00:00:04.758628 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:00:05.175469 sshd[6165]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:05.180289 systemd[1]: sshd@21-10.200.20.33:22-10.200.16.10:33620.service: Deactivated successfully.
Jan 24 00:00:05.183006 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:00:05.185908 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:00:05.187446 systemd-logind[1709]: Removed session 24.
Jan 24 00:00:06.433289 kubelet[3215]: E0124 00:00:06.433171 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7"
Jan 24 00:00:07.430981 kubelet[3215]: E0124 00:00:07.430930 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496"
Jan 24 00:00:10.254850 systemd[1]: Started sshd@22-10.200.20.33:22-10.200.16.10:35474.service - OpenSSH per-connection server daemon (10.200.16.10:35474).
Jan 24 00:00:10.432655 kubelet[3215]: E0124 00:00:10.432605 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851"
Jan 24 00:00:10.707073 sshd[6178]: Accepted publickey for core from 10.200.16.10 port 35474 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:10.708548 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:10.712665 systemd-logind[1709]: New session 25 of user core.
Jan 24 00:00:10.721700 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 24 00:00:11.103013 sshd[6178]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:11.107010 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit.
Jan 24 00:00:11.109232 systemd[1]: sshd@22-10.200.20.33:22-10.200.16.10:35474.service: Deactivated successfully.
Jan 24 00:00:11.113938 systemd[1]: session-25.scope: Deactivated successfully.
Jan 24 00:00:11.115396 systemd-logind[1709]: Removed session 25.
Jan 24 00:00:11.591080 systemd[1]: run-containerd-runc-k8s.io-49d41bde9c0e5be0ee731d5b1414e8ecd90cb8e6882869012a31193ebb74f2f7-runc.ecmWZQ.mount: Deactivated successfully.
Jan 24 00:00:12.435400 kubelet[3215]: E0124 00:00:12.434004 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
Jan 24 00:00:13.432737 kubelet[3215]: E0124 00:00:13.432490 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wtg4h" podUID="4d7c1094-ee9c-40ad-a8ec-c1150bd1b90f"
Jan 24 00:00:13.432737 kubelet[3215]: E0124 00:00:13.432639 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58bcdfdb7b-m76fr" podUID="c69a3a9c-9be4-419b-bd7b-2c7c74ce300e"
Jan 24 00:00:16.196353 systemd[1]: Started sshd@23-10.200.20.33:22-10.200.16.10:35488.service - OpenSSH per-connection server daemon (10.200.16.10:35488).
Jan 24 00:00:16.649761 sshd[6212]: Accepted publickey for core from 10.200.16.10 port 35488 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:16.651203 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:16.658940 systemd-logind[1709]: New session 26 of user core.
Jan 24 00:00:16.662836 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 24 00:00:17.053546 sshd[6212]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:17.057386 systemd[1]: sshd@23-10.200.20.33:22-10.200.16.10:35488.service: Deactivated successfully.
Jan 24 00:00:17.059390 systemd[1]: session-26.scope: Deactivated successfully.
Jan 24 00:00:17.060212 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit.
Jan 24 00:00:17.061320 systemd-logind[1709]: Removed session 26.
Jan 24 00:00:21.431753 kubelet[3215]: E0124 00:00:21.431694 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2rp8t" podUID="2db427fa-9e25-4d91-9748-361f655acfc7"
Jan 24 00:00:22.142687 systemd[1]: Started sshd@24-10.200.20.33:22-10.200.16.10:36316.service - OpenSSH per-connection server daemon (10.200.16.10:36316).
Jan 24 00:00:22.436539 kubelet[3215]: E0124 00:00:22.436231 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4879dc-2hqft" podUID="0adf700e-5270-411b-82bf-1b013a95c851"
Jan 24 00:00:22.441344 kubelet[3215]: E0124 00:00:22.441279 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dbb8cd949-trksp" podUID="1a7bcaec-bb3f-491a-bd0f-d443085a7496"
Jan 24 00:00:22.644854 sshd[6224]: Accepted publickey for core from 10.200.16.10 port 36316 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg
Jan 24 00:00:22.647060 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:00:22.652755 systemd-logind[1709]: New session 27 of user core.
Jan 24 00:00:22.659983 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 24 00:00:23.094369 sshd[6224]: pam_unix(sshd:session): session closed for user core
Jan 24 00:00:23.099504 systemd-logind[1709]: Session 27 logged out. Waiting for processes to exit.
Jan 24 00:00:23.099944 systemd[1]: session-27.scope: Deactivated successfully.
Jan 24 00:00:23.102709 systemd[1]: sshd@24-10.200.20.33:22-10.200.16.10:36316.service: Deactivated successfully.
Jan 24 00:00:23.105470 systemd-logind[1709]: Removed session 27.
Jan 24 00:00:23.432714 kubelet[3215]: E0124 00:00:23.431885 3215 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kzqw2" podUID="9e81a44d-05db-4251-91b5-ae7d0d2169e6"
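At this point every affected pod is in steady-state back-off and will stay there until its image reference resolves. Two hedged ways to unblock them once a correct reference is known; the docker.io paths and the deployment/container names below are assumptions to verify, not values from this log, and an operator-managed Calico install may revert manual image edits:

  # 1) Pre-load and alias the image in containerd's k8s.io namespace on the node
  ctr --namespace k8s.io images pull docker.io/calico/kube-controllers:v3.30.4
  ctr --namespace k8s.io images tag docker.io/calico/kube-controllers:v3.30.4 \
      ghcr.io/flatcar/calico/kube-controllers:v3.30.4
  # 2) Or repoint the workload at a registry that actually serves the tag
  kubectl -n calico-system set image deployment/calico-kube-controllers \
      calico-kube-controllers=docker.io/calico/kube-controllers:v3.30.4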