Jan 28 01:23:45.173300 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 28 01:23:45.173321 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Jan 27 23:05:14 -00 2026
Jan 28 01:23:45.173329 kernel: KASLR enabled
Jan 28 01:23:45.173335 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 28 01:23:45.173342 kernel: printk: bootconsole [pl11] enabled
Jan 28 01:23:45.173348 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:23:45.173355 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 28 01:23:45.173361 kernel: random: crng init done
Jan 28 01:23:45.173367 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:23:45.173373 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 28 01:23:45.173379 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173384 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173392 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 28 01:23:45.173398 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173406 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173412 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173418 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173426 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173432 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173438 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 28 01:23:45.173445 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:23:45.173451 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 28 01:23:45.173457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 28 01:23:45.173464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 28 01:23:45.173470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 28 01:23:45.173476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 28 01:23:45.173482 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 28 01:23:45.173489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 28 01:23:45.173497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 28 01:23:45.173503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 28 01:23:45.173510 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 28 01:23:45.173516 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 28 01:23:45.173522 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 28 01:23:45.173528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 28 01:23:45.173535 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 28 01:23:45.173541 kernel: Zone ranges:
Jan 28 01:23:45.173547 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 28 01:23:45.173553 kernel: DMA32 empty
Jan 28 01:23:45.173559 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 28 01:23:45.173566 kernel: Movable zone start for each node
Jan 28 01:23:45.173576 kernel: Early memory node ranges
Jan 28 01:23:45.173582 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 28 01:23:45.173589 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 28 01:23:45.173596 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 28 01:23:45.173603 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 28 01:23:45.173611 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 28 01:23:45.175662 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 28 01:23:45.175686 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 28 01:23:45.175695 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 28 01:23:45.175702 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 28 01:23:45.175709 kernel: psci: probing for conduit method from ACPI.
Jan 28 01:23:45.175716 kernel: psci: PSCIv1.1 detected in firmware.
Jan 28 01:23:45.175723 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 28 01:23:45.175730 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 28 01:23:45.175737 kernel: psci: SMC Calling Convention v1.4
Jan 28 01:23:45.175744 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 28 01:23:45.175750 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 28 01:23:45.175764 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 28 01:23:45.175771 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 28 01:23:45.175778 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 28 01:23:45.175785 kernel: Detected PIPT I-cache on CPU0
Jan 28 01:23:45.175791 kernel: CPU features: detected: GIC system register CPU interface
Jan 28 01:23:45.175798 kernel: CPU features: detected: Hardware dirty bit management
Jan 28 01:23:45.175805 kernel: CPU features: detected: Spectre-BHB
Jan 28 01:23:45.175812 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 28 01:23:45.175819 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 28 01:23:45.175826 kernel: CPU features: detected: ARM erratum 1418040
Jan 28 01:23:45.175832 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 28 01:23:45.175841 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 28 01:23:45.175848 kernel: alternatives: applying boot alternatives
Jan 28 01:23:45.175856 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf
Jan 28 01:23:45.175864 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:23:45.175870 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:23:45.175877 kernel: Fallback order for Node 0: 0
Jan 28 01:23:45.175884 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 28 01:23:45.175890 kernel: Policy zone: Normal
Jan 28 01:23:45.175897 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:23:45.175904 kernel: software IO TLB: area num 2.
Jan 28 01:23:45.175911 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 28 01:23:45.175919 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Jan 28 01:23:45.175926 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 28 01:23:45.175933 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:23:45.175940 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:23:45.175947 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 28 01:23:45.175954 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:23:45.175961 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:23:45.175968 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:23:45.175975 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 28 01:23:45.175982 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 28 01:23:45.175988 kernel: GICv3: 960 SPIs implemented
Jan 28 01:23:45.175996 kernel: GICv3: 0 Extended SPIs implemented
Jan 28 01:23:45.176003 kernel: Root IRQ handler: gic_handle_irq
Jan 28 01:23:45.176010 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 28 01:23:45.176016 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 28 01:23:45.176023 kernel: ITS: No ITS available, not enabling LPIs
Jan 28 01:23:45.176030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:23:45.176037 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 28 01:23:45.176044 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 28 01:23:45.176051 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 28 01:23:45.176058 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 28 01:23:45.176065 kernel: Console: colour dummy device 80x25
Jan 28 01:23:45.176074 kernel: printk: console [tty1] enabled
Jan 28 01:23:45.176081 kernel: ACPI: Core revision 20230628
Jan 28 01:23:45.176088 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 28 01:23:45.176095 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:23:45.176102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 01:23:45.176109 kernel: landlock: Up and running.
Jan 28 01:23:45.176116 kernel: SELinux: Initializing.
Jan 28 01:23:45.176123 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:23:45.176130 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:23:45.176139 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 28 01:23:45.176146 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 28 01:23:45.176153 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 28 01:23:45.176160 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 28 01:23:45.176167 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 28 01:23:45.176174 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:23:45.176181 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:23:45.176188 kernel: Remapping and enabling EFI services.
Jan 28 01:23:45.176201 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:23:45.176209 kernel: Detected PIPT I-cache on CPU1
Jan 28 01:23:45.176216 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 28 01:23:45.176223 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 28 01:23:45.176232 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 28 01:23:45.176240 kernel: smp: Brought up 1 node, 2 CPUs
Jan 28 01:23:45.176247 kernel: SMP: Total of 2 processors activated.
Jan 28 01:23:45.176255 kernel: CPU features: detected: 32-bit EL0 Support
Jan 28 01:23:45.176262 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 28 01:23:45.176271 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 28 01:23:45.176279 kernel: CPU features: detected: CRC32 instructions
Jan 28 01:23:45.176286 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 28 01:23:45.176293 kernel: CPU features: detected: LSE atomic instructions
Jan 28 01:23:45.176301 kernel: CPU features: detected: Privileged Access Never
Jan 28 01:23:45.176308 kernel: CPU: All CPU(s) started at EL1
Jan 28 01:23:45.176315 kernel: alternatives: applying system-wide alternatives
Jan 28 01:23:45.176323 kernel: devtmpfs: initialized
Jan 28 01:23:45.176330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:23:45.176339 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 28 01:23:45.176346 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:23:45.176354 kernel: SMBIOS 3.1.0 present.
Jan 28 01:23:45.176361 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 28 01:23:45.176369 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:23:45.176376 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 28 01:23:45.176384 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 28 01:23:45.176391 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 28 01:23:45.176398 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:23:45.176407 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 28 01:23:45.176414 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:23:45.176422 kernel: cpuidle: using governor menu
Jan 28 01:23:45.176429 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 28 01:23:45.176436 kernel: ASID allocator initialised with 32768 entries
Jan 28 01:23:45.176444 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:23:45.176451 kernel: Serial: AMBA PL011 UART driver
Jan 28 01:23:45.176458 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 28 01:23:45.176466 kernel: Modules: 0 pages in range for non-PLT usage
Jan 28 01:23:45.176475 kernel: Modules: 509008 pages in range for PLT usage
Jan 28 01:23:45.176482 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:23:45.176489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:23:45.176497 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 28 01:23:45.176504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 28 01:23:45.176511 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:23:45.176519 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:23:45.176526 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 28 01:23:45.176533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 28 01:23:45.176542 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:23:45.176549 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:23:45.176557 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:23:45.176564 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:23:45.176571 kernel: ACPI: Interpreter enabled
Jan 28 01:23:45.176578 kernel: ACPI: Using GIC for interrupt routing
Jan 28 01:23:45.176586 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 28 01:23:45.176593 kernel: printk: console [ttyAMA0] enabled
Jan 28 01:23:45.176600 kernel: printk: bootconsole [pl11] disabled
Jan 28 01:23:45.176609 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 28 01:23:45.176623 kernel: iommu: Default domain type: Translated
Jan 28 01:23:45.176632 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 28 01:23:45.176639 kernel: efivars: Registered efivars operations
Jan 28 01:23:45.176646 kernel: vgaarb: loaded
Jan 28 01:23:45.176653 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 28 01:23:45.176661 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:23:45.176668 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:23:45.176676 kernel: pnp: PnP ACPI init
Jan 28 01:23:45.176684 kernel: pnp: PnP ACPI: found 0 devices
Jan 28 01:23:45.176692 kernel: NET: Registered PF_INET protocol family
Jan 28 01:23:45.176699 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:23:45.176707 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:23:45.176714 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:23:45.176722 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:23:45.176729 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:23:45.176737 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:23:45.176744 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:23:45.176753 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:23:45.176760 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:23:45.176767 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:23:45.176775 kernel: kvm [1]: HYP mode not available
Jan 28 01:23:45.176782 kernel: Initialise system trusted keyrings
Jan 28 01:23:45.176789 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:23:45.176796 kernel: Key type asymmetric registered
Jan 28 01:23:45.176804 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:23:45.176811 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 01:23:45.176820 kernel: io scheduler mq-deadline registered
Jan 28 01:23:45.176827 kernel: io scheduler kyber registered
Jan 28 01:23:45.176835 kernel: io scheduler bfq registered
Jan 28 01:23:45.176842 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:23:45.176849 kernel: thunder_xcv, ver 1.0
Jan 28 01:23:45.176856 kernel: thunder_bgx, ver 1.0
Jan 28 01:23:45.176863 kernel: nicpf, ver 1.0
Jan 28 01:23:45.176871 kernel: nicvf, ver 1.0
Jan 28 01:23:45.177018 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 28 01:23:45.177093 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T01:23:44 UTC (1769563424)
Jan 28 01:23:45.177103 kernel: efifb: probing for efifb
Jan 28 01:23:45.177111 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 28 01:23:45.177118 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 28 01:23:45.177125 kernel: efifb: scrolling: redraw
Jan 28 01:23:45.177133 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 28 01:23:45.177140 kernel: Console: switching to colour frame buffer device 128x48
Jan 28 01:23:45.177147 kernel: fb0: EFI VGA frame buffer device
Jan 28 01:23:45.177157 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 28 01:23:45.177164 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 28 01:23:45.177172 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 28 01:23:45.177179 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 28 01:23:45.177186 kernel: watchdog: Hard watchdog permanently disabled
Jan 28 01:23:45.177194 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:23:45.177201 kernel: Segment Routing with IPv6
Jan 28 01:23:45.177208 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:23:45.177215 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:23:45.177224 kernel: Key type dns_resolver registered
Jan 28 01:23:45.177231 kernel: registered taskstats version 1
Jan 28 01:23:45.177238 kernel: Loading compiled-in X.509 certificates
Jan 28 01:23:45.177246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 00ce1dc8bc64b61f07099b23b76dee034878817c'
Jan 28 01:23:45.177253 kernel: Key type .fscrypt registered
Jan 28 01:23:45.177260 kernel: Key type fscrypt-provisioning registered
Jan 28 01:23:45.177268 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 01:23:45.177275 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:23:45.177282 kernel: ima: No architecture policies found
Jan 28 01:23:45.177291 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 28 01:23:45.177299 kernel: clk: Disabling unused clocks
Jan 28 01:23:45.177306 kernel: Freeing unused kernel memory: 39424K
Jan 28 01:23:45.177313 kernel: Run /init as init process
Jan 28 01:23:45.177320 kernel: with arguments:
Jan 28 01:23:45.177327 kernel: /init
Jan 28 01:23:45.177334 kernel: with environment:
Jan 28 01:23:45.177341 kernel: HOME=/
Jan 28 01:23:45.177349 kernel: TERM=linux
Jan 28 01:23:45.177358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 01:23:45.177369 systemd[1]: Detected virtualization microsoft.
Jan 28 01:23:45.177377 systemd[1]: Detected architecture arm64.
Jan 28 01:23:45.177385 systemd[1]: Running in initrd.
Jan 28 01:23:45.177392 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:23:45.177400 systemd[1]: Hostname set to .
Jan 28 01:23:45.177408 systemd[1]: Initializing machine ID from random generator.
Jan 28 01:23:45.177418 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:23:45.177426 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:23:45.177434 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:23:45.177442 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:23:45.177451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:23:45.177459 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:23:45.177467 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:23:45.177476 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 01:23:45.177486 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 01:23:45.177494 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:23:45.177502 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:23:45.177509 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:23:45.177517 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:23:45.177525 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:23:45.177533 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:23:45.177541 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:23:45.177550 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:23:45.177558 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:23:45.177566 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 01:23:45.177574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:23:45.177582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:23:45.177590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:23:45.177598 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:23:45.177606 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:23:45.177615 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:23:45.179523 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:23:45.179533 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:23:45.179541 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:23:45.179549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:23:45.179586 systemd-journald[217]: Collecting audit messages is disabled.
Jan 28 01:23:45.179612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:23:45.179630 systemd-journald[217]: Journal started
Jan 28 01:23:45.179650 systemd-journald[217]: Runtime Journal (/run/log/journal/2629c1bddd274a0781cbad33420dcedf) is 8.0M, max 78.5M, 70.5M free.
Jan 28 01:23:45.179712 systemd-modules-load[218]: Inserted module 'overlay'
Jan 28 01:23:45.198409 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:23:45.193956 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:23:45.206684 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:23:45.227411 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:23:45.227435 kernel: Bridge firewalling registered
Jan 28 01:23:45.221526 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 28 01:23:45.223590 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:23:45.231159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:23:45.239590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:23:45.257906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:23:45.265762 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:23:45.280873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:23:45.304877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:23:45.311290 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:23:45.323013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:23:45.338229 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:23:45.343809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:23:45.367038 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:23:45.376889 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:23:45.385809 dracut-cmdline[252]: dracut-dracut-053
Jan 28 01:23:45.390865 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf
Jan 28 01:23:45.414789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:23:45.434944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:23:45.456406 systemd-resolved[259]: Positive Trust Anchors:
Jan 28 01:23:45.456424 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:23:45.456456 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:23:45.458603 systemd-resolved[259]: Defaulting to hostname 'linux'.
Jan 28 01:23:45.459458 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:23:45.464725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:23:45.540638 kernel: SCSI subsystem initialized
Jan 28 01:23:45.547643 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:23:45.556702 kernel: iscsi: registered transport (tcp)
Jan 28 01:23:45.572572 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:23:45.572603 kernel: QLogic iSCSI HBA Driver
Jan 28 01:23:45.610564 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:23:45.622944 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:23:45.651829 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:23:45.651888 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:23:45.656709 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 01:23:45.704636 kernel: raid6: neonx8 gen() 15815 MB/s
Jan 28 01:23:45.723625 kernel: raid6: neonx4 gen() 15689 MB/s
Jan 28 01:23:45.742629 kernel: raid6: neonx2 gen() 13309 MB/s
Jan 28 01:23:45.762630 kernel: raid6: neonx1 gen() 10548 MB/s
Jan 28 01:23:45.781625 kernel: raid6: int64x8 gen() 6971 MB/s
Jan 28 01:23:45.800641 kernel: raid6: int64x4 gen() 7362 MB/s
Jan 28 01:23:45.820629 kernel: raid6: int64x2 gen() 6146 MB/s
Jan 28 01:23:45.842035 kernel: raid6: int64x1 gen() 5072 MB/s
Jan 28 01:23:45.842047 kernel: raid6: using algorithm neonx8 gen() 15815 MB/s
Jan 28 01:23:45.864172 kernel: raid6: .... xor() 11956 MB/s, rmw enabled
Jan 28 01:23:45.864183 kernel: raid6: using neon recovery algorithm
Jan 28 01:23:45.873825 kernel: xor: measuring software checksum speed
Jan 28 01:23:45.873879 kernel: 8regs : 19764 MB/sec
Jan 28 01:23:45.876650 kernel: 32regs : 19552 MB/sec
Jan 28 01:23:45.880233 kernel: arm64_neon : 27007 MB/sec
Jan 28 01:23:45.883536 kernel: xor: using function: arm64_neon (27007 MB/sec)
Jan 28 01:23:45.933638 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:23:45.942663 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:23:45.955735 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:23:45.974185 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Jan 28 01:23:45.978431 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:23:45.991847 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:23:46.011550 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Jan 28 01:23:46.038752 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:23:46.052745 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:23:46.088206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:23:46.103866 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:23:46.125965 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:23:46.135693 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:23:46.152595 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:23:46.169049 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:23:46.191068 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:23:46.207339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:23:46.207493 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:23:46.226837 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:23:46.242678 kernel: hv_vmbus: Vmbus version:5.3
Jan 28 01:23:46.242700 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 28 01:23:46.242711 kernel: hv_vmbus: registering driver hid_hyperv
Jan 28 01:23:46.231880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:23:46.281812 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 28 01:23:46.281842 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 28 01:23:46.281853 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 28 01:23:46.281862 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 28 01:23:46.232183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:23:46.299732 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 28 01:23:46.268309 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:23:46.308915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:23:46.328029 kernel: PTP clock support registered
Jan 28 01:23:46.328051 kernel: hv_vmbus: registering driver hv_netvsc
Jan 28 01:23:46.328061 kernel: hv_vmbus: registering driver hv_storvsc
Jan 28 01:23:46.323315 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:23:46.354724 kernel: scsi host1: storvsc_host_t
Jan 28 01:23:46.354897 kernel: scsi host0: storvsc_host_t
Jan 28 01:23:46.354997 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 28 01:23:46.333954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:23:46.358963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:23:46.377292 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 28 01:23:46.359114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:23:46.372765 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:23:46.393924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:23:46.411458 kernel: hv_utils: Registering HyperV Utility Driver
Jan 28 01:23:46.411482 kernel: hv_vmbus: registering driver hv_utils
Jan 28 01:23:46.418627 kernel: hv_utils: Heartbeat IC version 3.0
Jan 28 01:23:46.418675 kernel: hv_utils: Shutdown IC version 3.2
Jan 28 01:23:46.747904 kernel: hv_utils: TimeSync IC version 4.0
Jan 28 01:23:46.747942 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: VF slot 1 added
Jan 28 01:23:46.747863 systemd-resolved[259]: Clock change detected. Flushing caches.
Jan 28 01:23:46.764118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:23:46.785317 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 28 01:23:46.785497 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:23:46.785509 kernel: hv_vmbus: registering driver hv_pci
Jan 28 01:23:46.785518 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 28 01:23:46.794548 kernel: hv_pci 20ca457c-bf4f-4d10-9e63-6f409154064f: PCI VMBus probing: Using version 0x10004
Jan 28 01:23:46.796490 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:23:46.820398 kernel: hv_pci 20ca457c-bf4f-4d10-9e63-6f409154064f: PCI host bridge to bus bf4f:00
Jan 28 01:23:46.820574 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#170 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 01:23:46.833883 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 28 01:23:46.834126 kernel: pci_bus bf4f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 28 01:23:46.834225 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 28 01:23:46.834311 kernel: pci_bus bf4f:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 28 01:23:46.843293 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 28 01:23:46.843529 kernel: pci bf4f:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 28 01:23:46.846759 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 28 01:23:46.853014 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 28 01:23:46.853204 kernel: pci bf4f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 28 01:23:46.862304 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:23:46.888028 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:23:46.888053 kernel: pci bf4f:00:02.0: enabling Extended Tags
Jan 28 01:23:46.888081 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 28 01:23:46.905543 kernel: pci bf4f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bf4f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 28 01:23:46.905654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#99 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 01:23:46.916346 kernel: pci_bus bf4f:00: busn_res: [bus 00-ff] end is updated to 00
Jan 28 01:23:46.916631 kernel: pci bf4f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 28 01:23:46.962483 kernel: mlx5_core bf4f:00:02.0: enabling device (0000 -> 0002)
Jan 28 01:23:46.962740 kernel: mlx5_core bf4f:00:02.0: firmware version: 16.30.5026
Jan 28 01:23:47.161497 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: VF registering: eth1
Jan 28 01:23:47.161689 kernel: mlx5_core bf4f:00:02.0 eth1: joined to eth0
Jan 28 01:23:47.166679 kernel: mlx5_core bf4f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 28 01:23:47.179500 kernel: mlx5_core bf4f:00:02.0 enP48975s1: renamed from eth1
Jan 28 01:23:47.458893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 28 01:23:47.517480 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (487)
Jan 28 01:23:47.531559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 28 01:23:47.572490 kernel: BTRFS: device fsid 0fc26676-8036-4cd5-8c30-2943afb25b0b devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (485)
Jan 28 01:23:47.585554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 28 01:23:47.591333 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 28 01:23:47.615578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:23:47.635079 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 28 01:23:47.650472 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:23:47.658473 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:23:47.667475 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:23:48.669528 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:23:48.670066 disk-uuid[610]: The operation has completed successfully.
Jan 28 01:23:48.737554 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:23:48.741290 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:23:48.761566 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 01:23:48.770948 sh[723]: Success
Jan 28 01:23:48.798497 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 28 01:23:49.047940 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:23:49.069579 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 01:23:49.077623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 01:23:49.103693 kernel: BTRFS info (device dm-0): first mount of filesystem 0fc26676-8036-4cd5-8c30-2943afb25b0b
Jan 28 01:23:49.103749 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:23:49.109078 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 01:23:49.113032 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:23:49.116281 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 01:23:49.432363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 01:23:49.436131 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:23:49.453653 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:23:49.463091 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:23:49.491711 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:23:49.491762 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:23:49.495167 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:23:49.549814 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:23:49.567652 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:23:49.570717 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:23:49.584236 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 01:23:49.593473 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:23:49.595052 systemd-networkd[899]: lo: Link UP
Jan 28 01:23:49.595061 systemd-networkd[899]: lo: Gained carrier
Jan 28 01:23:49.597157 systemd-networkd[899]: Enumeration completed
Jan 28 01:23:49.597750 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:23:49.598026 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:23:49.598030 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:23:49.603625 systemd[1]: Reached target network.target - Network.
Jan 28 01:23:49.618771 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:23:49.646732 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:23:49.697470 kernel: mlx5_core bf4f:00:02.0 enP48975s1: Link up
Jan 28 01:23:49.733669 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: Data path switched to VF: enP48975s1
Jan 28 01:23:49.733362 systemd-networkd[899]: enP48975s1: Link UP
Jan 28 01:23:49.733443 systemd-networkd[899]: eth0: Link UP
Jan 28 01:23:49.733558 systemd-networkd[899]: eth0: Gained carrier
Jan 28 01:23:49.733566 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:23:49.739640 systemd-networkd[899]: enP48975s1: Gained carrier
Jan 28 01:23:49.760493 systemd-networkd[899]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 28 01:23:50.582730 ignition[908]: Ignition 2.19.0
Jan 28 01:23:50.582742 ignition[908]: Stage: fetch-offline
Jan 28 01:23:50.582778 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:23:50.587624 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:23:50.582786 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:23:50.582882 ignition[908]: parsed url from cmdline: ""
Jan 28 01:23:50.582885 ignition[908]: no config URL provided
Jan 28 01:23:50.582890 ignition[908]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:23:50.582896 ignition[908]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:23:50.582901 ignition[908]: failed to fetch config: resource requires networking
Jan 28 01:23:50.618198 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 28 01:23:50.585866 ignition[908]: Ignition finished successfully
Jan 28 01:23:50.631170 ignition[916]: Ignition 2.19.0
Jan 28 01:23:50.631177 ignition[916]: Stage: fetch
Jan 28 01:23:50.631384 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:23:50.631397 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:23:50.633749 ignition[916]: parsed url from cmdline: ""
Jan 28 01:23:50.633754 ignition[916]: no config URL provided
Jan 28 01:23:50.633761 ignition[916]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:23:50.633774 ignition[916]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:23:50.633799 ignition[916]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 28 01:23:50.713510 ignition[916]: GET result: OK
Jan 28 01:23:50.713606 ignition[916]: config has been read from IMDS userdata
Jan 28 01:23:50.713657 ignition[916]: parsing config with SHA512: aac34cbb961cd82ef616ddef8776492266795d7ad4819dd90080a8b5dad6edd5a2ca29fec6f2f37e73a350aa050c3528726067e8835a5b5726910e8dd5f2f077
Jan 28 01:23:50.717130 unknown[916]: fetched base config from "system"
Jan 28 01:23:50.717515 ignition[916]: fetch: fetch complete
Jan 28 01:23:50.717137 unknown[916]: fetched base config from "system"
Jan 28 01:23:50.717520 ignition[916]: fetch: fetch passed
Jan 28 01:23:50.717142 unknown[916]: fetched user config from "azure"
Jan 28 01:23:50.717563 ignition[916]: Ignition finished successfully
Jan 28 01:23:50.721294 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 28 01:23:50.742646 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:23:50.762556 ignition[922]: Ignition 2.19.0
Jan 28 01:23:50.762564 ignition[922]: Stage: kargs
Jan 28 01:23:50.766695 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:23:50.762733 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:23:50.762742 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:23:50.763765 ignition[922]: kargs: kargs passed
Jan 28 01:23:50.763811 ignition[922]: Ignition finished successfully
Jan 28 01:23:50.790601 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:23:50.805524 ignition[928]: Ignition 2.19.0
Jan 28 01:23:50.805533 ignition[928]: Stage: disks
Jan 28 01:23:50.809524 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:23:50.805693 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:23:50.815848 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:23:50.805701 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:23:50.824199 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:23:50.806572 ignition[928]: disks: disks passed
Jan 28 01:23:50.832859 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:23:50.806629 ignition[928]: Ignition finished successfully
Jan 28 01:23:50.841404 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:23:50.850349 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:23:50.868599 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:23:50.949916 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 28 01:23:50.960624 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:23:50.972625 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:23:51.026488 kernel: EXT4-fs (sda9): mounted filesystem 2c7419f5-3bc3-4c5f-b132-f03585db88cd r/w with ordered data mode. Quota mode: none.
Jan 28 01:23:51.026723 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:23:51.033636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:23:51.075522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:23:51.105344 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948)
Jan 28 01:23:51.105395 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:23:51.105412 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:23:51.108713 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:23:51.108608 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:23:51.117059 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 28 01:23:51.127744 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:23:51.145552 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:23:51.127776 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:23:51.137380 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:23:51.150191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:23:51.167658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:23:51.637628 systemd-networkd[899]: eth0: Gained IPv6LL
Jan 28 01:23:51.783827 coreos-metadata[963]: Jan 28 01:23:51.783 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 28 01:23:51.791213 coreos-metadata[963]: Jan 28 01:23:51.791 INFO Fetch successful
Jan 28 01:23:51.795290 coreos-metadata[963]: Jan 28 01:23:51.795 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 28 01:23:51.813242 coreos-metadata[963]: Jan 28 01:23:51.813 INFO Fetch successful
Jan 28 01:23:51.845936 coreos-metadata[963]: Jan 28 01:23:51.845 INFO wrote hostname ci-4081.3.6-n-20d4350ff0 to /sysroot/etc/hostname
Jan 28 01:23:51.854495 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 28 01:23:52.137368 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 01:23:52.174906 initrd-setup-root[984]: cut: /sysroot/etc/group: No such file or directory
Jan 28 01:23:52.196013 initrd-setup-root[991]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 01:23:52.215200 initrd-setup-root[998]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 01:23:53.552507 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:23:53.563678 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:23:53.569866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:23:53.588652 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:23:53.589003 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:23:53.612389 ignition[1066]: INFO : Ignition 2.19.0
Jan 28 01:23:53.616099 ignition[1066]: INFO : Stage: mount
Jan 28 01:23:53.616099 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:23:53.616099 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:23:53.616099 ignition[1066]: INFO : mount: mount passed
Jan 28 01:23:53.616099 ignition[1066]: INFO : Ignition finished successfully
Jan 28 01:23:53.619005 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 01:23:53.627049 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 01:23:53.646658 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 01:23:53.661727 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:23:53.680480 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077)
Jan 28 01:23:53.690626 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:23:53.690661 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:23:53.693987 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:23:53.701494 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:23:53.702310 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:23:53.726834 ignition[1094]: INFO : Ignition 2.19.0
Jan 28 01:23:53.726834 ignition[1094]: INFO : Stage: files
Jan 28 01:23:53.733862 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:23:53.733862 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:23:53.733862 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 01:23:53.733862 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 01:23:53.733862 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 01:23:54.009178 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 01:23:54.015023 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 01:23:54.015023 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 01:23:54.012036 unknown[1094]: wrote ssh authorized keys file for user: core
Jan 28 01:23:54.057554 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 28 01:23:54.065819 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 28 01:23:54.110146 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 28 01:23:54.577697 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 28 01:23:54.950229 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 01:23:54.950229 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 28 01:23:54.983248 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:23:54.992722 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:23:54.992722 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 28 01:23:54.992722 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 01:23:54.992722 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 01:23:54.992722 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:23:54.992722 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:23:54.992722 ignition[1094]: INFO : files: files passed
Jan 28 01:23:54.992722 ignition[1094]: INFO : Ignition finished successfully
Jan 28 01:23:54.993409 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 01:23:55.017196 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 01:23:55.041608 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 01:23:55.047887 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 01:23:55.047978 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:23:55.084098 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:23:55.084098 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:23:55.097611 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:23:55.092692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:23:55.103594 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 01:23:55.125687 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 01:23:55.152021 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 01:23:55.156189 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 01:23:55.162236 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:23:55.171473 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:23:55.180426 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:23:55.191693 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:23:55.207598 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:23:55.220941 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:23:55.235810 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:23:55.241401 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:23:55.251026 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:23:55.259583 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:23:55.259745 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:23:55.271963 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:23:55.280688 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:23:55.288334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:23:55.296115 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:23:55.305793 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:23:55.315854 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:23:55.324348 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:23:55.333505 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:23:55.342761 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:23:55.350880 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:23:55.358218 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:23:55.358381 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:23:55.369660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:23:55.378237 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:23:55.387444 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:23:55.387554 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:23:55.397546 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:23:55.397707 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:23:55.410943 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:23:55.411099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:23:55.420116 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:23:55.420265 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:23:55.428574 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 28 01:23:55.428714 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 01:23:55.453546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 28 01:23:55.475606 ignition[1146]: INFO : Ignition 2.19.0 Jan 28 01:23:55.475606 ignition[1146]: INFO : Stage: umount Jan 28 01:23:55.502317 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:55.502317 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:55.502317 ignition[1146]: INFO : umount: umount passed Jan 28 01:23:55.502317 ignition[1146]: INFO : Ignition finished successfully Jan 28 01:23:55.476815 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:23:55.482714 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:23:55.482924 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:23:55.488373 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:23:55.488588 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:23:55.502635 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:23:55.502733 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:23:55.517781 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:23:55.521279 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:23:55.521386 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:23:55.530793 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:23:55.530844 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:23:55.541155 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:23:55.541204 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:23:55.548784 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 01:23:55.548821 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 01:23:55.557577 systemd[1]: Stopped target network.target - Network. Jan 28 01:23:55.565643 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:23:55.565700 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:23:55.574801 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:23:55.582722 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:23:55.593829 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:23:55.599106 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:23:55.607950 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:23:55.611995 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:23:55.612051 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:23:55.621398 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:23:55.621441 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:23:55.629456 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:23:55.629511 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:23:55.637564 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:23:55.637599 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:23:55.645981 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:23:55.658058 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
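The umount stage above is followed by a uniform teardown pattern: each unit logs "Deactivated successfully." and then a "Stopped ..." (or "Closed ...") line, so the initrd shutdown order can be recovered mechanically from a saved copy of this journal. A small sketch (the boot.log filename is hypothetical):

    import re

    # Matches e.g. "systemd[1]: Stopped ignition-disks.service - Ignition (disks)."
    # and socket lines such as "systemd[1]: Closed iscsid.socket - ...".
    STOPPED = re.compile(
        r"systemd\[1\]: (?:Stopped|Closed) (?:target )?"
        r"(\S+\.(?:service|target|path|socket|mount))"
    )

    with open("boot.log") as f:  # hypothetical saved journal dump
        for unit in STOPPED.findall(f.read()):
            print(unit)  # units print in the order they were torn down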
Jan 28 01:23:55.666497 systemd-networkd[899]: eth0: DHCPv6 lease lost Jan 28 01:23:55.670500 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:23:55.670662 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:23:55.685383 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:23:55.685584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:23:55.694798 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:23:55.694848 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:23:55.717646 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:23:55.852304 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: Data path switched from VF: enP48975s1 Jan 28 01:23:55.725052 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:23:55.725118 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:23:55.733973 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:23:55.734012 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:23:55.743264 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:23:55.743308 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:23:55.752163 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:23:55.752202 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:23:55.762161 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:23:55.794053 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:23:55.794255 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:23:55.803430 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:23:55.803543 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:23:55.812182 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:23:55.812217 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:23:55.820440 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:23:55.820539 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:23:55.832736 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:23:55.832782 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:23:55.856482 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:23:55.856540 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:23:55.880695 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:23:55.890750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:23:55.890822 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:23:55.905935 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 01:23:55.905993 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:23:55.916970 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 28 01:23:55.917017 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:23:55.926148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:23:55.926190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:55.936771 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:23:55.936863 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:23:55.946431 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:23:55.946931 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:23:56.106035 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:23:56.106152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:23:56.110709 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:23:56.119295 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:23:56.119357 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:23:56.141721 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:23:56.168145 systemd[1]: Switching root. Jan 28 01:23:56.235045 systemd-journald[217]: Journal stopped
Jan 28 01:23:45.175890 kernel: Policy zone: Normal Jan 28 01:23:45.175897 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 01:23:45.175904 kernel: software IO TLB: area num 2. Jan 28 01:23:45.175911 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 28 01:23:45.175919 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 28 01:23:45.175926 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 28 01:23:45.175933 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 01:23:45.175940 kernel: rcu: RCU event tracing is enabled. Jan 28 01:23:45.175947 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 28 01:23:45.175954 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 01:23:45.175961 kernel: Tracing variant of Tasks RCU enabled. Jan 28 01:23:45.175968 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:23:45.175975 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 28 01:23:45.175982 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 28 01:23:45.175988 kernel: GICv3: 960 SPIs implemented Jan 28 01:23:45.175996 kernel: GICv3: 0 Extended SPIs implemented Jan 28 01:23:45.176003 kernel: Root IRQ handler: gic_handle_irq Jan 28 01:23:45.176010 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 28 01:23:45.176016 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 28 01:23:45.176023 kernel: ITS: No ITS available, not enabling LPIs Jan 28 01:23:45.176030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 01:23:45.176037 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 28 01:23:45.176044 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 28 01:23:45.176051 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 28 01:23:45.176058 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 28 01:23:45.176065 kernel: Console: colour dummy device 80x25 Jan 28 01:23:45.176074 kernel: printk: console [tty1] enabled Jan 28 01:23:45.176081 kernel: ACPI: Core revision 20230628 Jan 28 01:23:45.176088 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 28 01:23:45.176095 kernel: pid_max: default: 32768 minimum: 301 Jan 28 01:23:45.176102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 28 01:23:45.176109 kernel: landlock: Up and running. Jan 28 01:23:45.176116 kernel: SELinux: Initializing. Jan 28 01:23:45.176123 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 01:23:45.176130 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 01:23:45.176139 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:23:45.176146 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:23:45.176153 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 28 01:23:45.176160 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 28 01:23:45.176167 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 28 01:23:45.176174 kernel: rcu: Hierarchical SRCU implementation. Jan 28 01:23:45.176181 kernel: rcu: Max phase no-delay instances is 400. Jan 28 01:23:45.176188 kernel: Remapping and enabling EFI services. Jan 28 01:23:45.176201 kernel: smp: Bringing up secondary CPUs ... Jan 28 01:23:45.176209 kernel: Detected PIPT I-cache on CPU1 Jan 28 01:23:45.176216 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 28 01:23:45.176223 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 28 01:23:45.176232 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 28 01:23:45.176240 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 01:23:45.176247 kernel: SMP: Total of 2 processors activated. 
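The calibration entry above ("50.00 BogoMIPS (lpj=25000)") is computed from the 25.00 MHz architected timer rather than measured with the delay loop. Assuming the classic bogomips = lpj * HZ / 500000 relation and a 1000 Hz tick (HZ is an assumption, though lpj = 25 MHz / HZ only yields 25000 when HZ is 1000), the figure checks out:

    lpj = 25_000  # loops per jiffy, from the calibration line
    HZ = 1_000    # assumed kernel tick rate
    print(f"{lpj * HZ / 500_000:.2f} BogoMIPS")  # -> 50.00, matching the log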
Jan 28 01:23:45.176255 kernel: CPU features: detected: 32-bit EL0 Support Jan 28 01:23:45.176262 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 28 01:23:45.176271 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 28 01:23:45.176279 kernel: CPU features: detected: CRC32 instructions Jan 28 01:23:45.176286 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 28 01:23:45.176293 kernel: CPU features: detected: LSE atomic instructions Jan 28 01:23:45.176301 kernel: CPU features: detected: Privileged Access Never Jan 28 01:23:45.176308 kernel: CPU: All CPU(s) started at EL1 Jan 28 01:23:45.176315 kernel: alternatives: applying system-wide alternatives Jan 28 01:23:45.176323 kernel: devtmpfs: initialized Jan 28 01:23:45.176330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 01:23:45.176339 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 28 01:23:45.176346 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 01:23:45.176354 kernel: SMBIOS 3.1.0 present. Jan 28 01:23:45.176361 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 28 01:23:45.176369 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 01:23:45.176376 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 28 01:23:45.176384 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 28 01:23:45.176391 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 28 01:23:45.176398 kernel: audit: initializing netlink subsys (disabled) Jan 28 01:23:45.176407 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 28 01:23:45.176414 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 01:23:45.176422 kernel: cpuidle: using governor menu Jan 28 01:23:45.176429 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 28 01:23:45.176436 kernel: ASID allocator initialised with 32768 entries Jan 28 01:23:45.176444 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 01:23:45.176451 kernel: Serial: AMBA PL011 UART driver Jan 28 01:23:45.176458 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 28 01:23:45.176466 kernel: Modules: 0 pages in range for non-PLT usage Jan 28 01:23:45.176475 kernel: Modules: 509008 pages in range for PLT usage Jan 28 01:23:45.176482 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 01:23:45.176489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 01:23:45.176497 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 28 01:23:45.176504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 28 01:23:45.176511 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 01:23:45.176519 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 01:23:45.176526 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 28 01:23:45.176533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 28 01:23:45.176542 kernel: ACPI: Added _OSI(Module Device) Jan 28 01:23:45.176549 kernel: ACPI: Added _OSI(Processor Device) Jan 28 01:23:45.176557 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 01:23:45.176564 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 01:23:45.176571 kernel: ACPI: Interpreter enabled Jan 28 01:23:45.176578 kernel: ACPI: Using GIC for interrupt routing Jan 28 01:23:45.176586 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 28 01:23:45.176593 kernel: printk: console [ttyAMA0] enabled Jan 28 01:23:45.176600 kernel: printk: bootconsole [pl11] disabled Jan 28 01:23:45.176609 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 28 01:23:45.176623 kernel: iommu: Default domain type: Translated Jan 28 01:23:45.176632 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 28 01:23:45.176639 kernel: efivars: Registered efivars operations Jan 28 01:23:45.176646 kernel: vgaarb: loaded Jan 28 01:23:45.176653 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 28 01:23:45.176661 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 01:23:45.176668 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 01:23:45.176676 kernel: pnp: PnP ACPI init Jan 28 01:23:45.176684 kernel: pnp: PnP ACPI: found 0 devices Jan 28 01:23:45.176692 kernel: NET: Registered PF_INET protocol family Jan 28 01:23:45.176699 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 01:23:45.176707 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 01:23:45.176714 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 01:23:45.176722 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 01:23:45.176729 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 01:23:45.176737 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 01:23:45.176744 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 01:23:45.176753 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 01:23:45.176760 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 
01:23:45.176767 kernel: PCI: CLS 0 bytes, default 64 Jan 28 01:23:45.176775 kernel: kvm [1]: HYP mode not available Jan 28 01:23:45.176782 kernel: Initialise system trusted keyrings Jan 28 01:23:45.176789 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 01:23:45.176796 kernel: Key type asymmetric registered Jan 28 01:23:45.176804 kernel: Asymmetric key parser 'x509' registered Jan 28 01:23:45.176811 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 28 01:23:45.176820 kernel: io scheduler mq-deadline registered Jan 28 01:23:45.176827 kernel: io scheduler kyber registered Jan 28 01:23:45.176835 kernel: io scheduler bfq registered Jan 28 01:23:45.176842 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 01:23:45.176849 kernel: thunder_xcv, ver 1.0 Jan 28 01:23:45.176856 kernel: thunder_bgx, ver 1.0 Jan 28 01:23:45.176863 kernel: nicpf, ver 1.0 Jan 28 01:23:45.176871 kernel: nicvf, ver 1.0 Jan 28 01:23:45.177018 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 28 01:23:45.177093 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T01:23:44 UTC (1769563424) Jan 28 01:23:45.177103 kernel: efifb: probing for efifb Jan 28 01:23:45.177111 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 28 01:23:45.177118 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 28 01:23:45.177125 kernel: efifb: scrolling: redraw Jan 28 01:23:45.177133 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 28 01:23:45.177140 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 01:23:45.177147 kernel: fb0: EFI VGA frame buffer device Jan 28 01:23:45.177157 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 28 01:23:45.177164 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 01:23:45.177172 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 28 01:23:45.177179 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 28 01:23:45.177186 kernel: watchdog: Hard watchdog permanently disabled Jan 28 01:23:45.177194 kernel: NET: Registered PF_INET6 protocol family Jan 28 01:23:45.177201 kernel: Segment Routing with IPv6 Jan 28 01:23:45.177208 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 01:23:45.177215 kernel: NET: Registered PF_PACKET protocol family Jan 28 01:23:45.177224 kernel: Key type dns_resolver registered Jan 28 01:23:45.177231 kernel: registered taskstats version 1 Jan 28 01:23:45.177238 kernel: Loading compiled-in X.509 certificates Jan 28 01:23:45.177246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 00ce1dc8bc64b61f07099b23b76dee034878817c' Jan 28 01:23:45.177253 kernel: Key type .fscrypt registered Jan 28 01:23:45.177260 kernel: Key type fscrypt-provisioning registered Jan 28 01:23:45.177268 kernel: ima: No TPM chip found, activating TPM-bypass! 
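The rtc-efi entry above pairs the human-readable boot time with its raw epoch value, and the two agree:

    from datetime import datetime, timezone

    epoch = 1_769_563_424  # from the rtc-efi line
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2026-01-28T01:23:44+00:00, matching "setting system clock to 2026-01-28T01:23:44 UTC"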
Jan 28 01:23:45.177275 kernel: ima: Allocated hash algorithm: sha1 Jan 28 01:23:45.177282 kernel: ima: No architecture policies found Jan 28 01:23:45.177291 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 28 01:23:45.177299 kernel: clk: Disabling unused clocks Jan 28 01:23:45.177306 kernel: Freeing unused kernel memory: 39424K Jan 28 01:23:45.177313 kernel: Run /init as init process Jan 28 01:23:45.177320 kernel: with arguments: Jan 28 01:23:45.177327 kernel: /init Jan 28 01:23:45.177334 kernel: with environment: Jan 28 01:23:45.177341 kernel: HOME=/ Jan 28 01:23:45.177349 kernel: TERM=linux Jan 28 01:23:45.177358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 01:23:45.177369 systemd[1]: Detected virtualization microsoft. Jan 28 01:23:45.177377 systemd[1]: Detected architecture arm64. Jan 28 01:23:45.177385 systemd[1]: Running in initrd. Jan 28 01:23:45.177392 systemd[1]: No hostname configured, using default hostname. Jan 28 01:23:45.177400 systemd[1]: Hostname set to <localhost>. Jan 28 01:23:45.177408 systemd[1]: Initializing machine ID from random generator. Jan 28 01:23:45.177418 systemd[1]: Queued start job for default target initrd.target. Jan 28 01:23:45.177426 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:23:45.177434 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:23:45.177442 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 01:23:45.177451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:23:45.177459 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 01:23:45.177467 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 01:23:45.177476 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 01:23:45.177486 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 01:23:45.177494 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:23:45.177502 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:23:45.177509 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:23:45.177517 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:23:45.177525 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:23:45.177533 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:23:45.177541 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:23:45.177550 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:23:45.177558 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:23:45.177566 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 28 01:23:45.177574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:23:45.177582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:23:45.177590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:23:45.177598 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:23:45.177606 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 01:23:45.177615 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:23:45.179523 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 01:23:45.179533 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 01:23:45.179541 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:23:45.179549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:23:45.179586 systemd-journald[217]: Collecting audit messages is disabled. Jan 28 01:23:45.179612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:45.179630 systemd-journald[217]: Journal started Jan 28 01:23:45.179650 systemd-journald[217]: Runtime Journal (/run/log/journal/2629c1bddd274a0781cbad33420dcedf) is 8.0M, max 78.5M, 70.5M free. Jan 28 01:23:45.179712 systemd-modules-load[218]: Inserted module 'overlay' Jan 28 01:23:45.198409 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:23:45.193956 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 01:23:45.206684 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:23:45.227411 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 01:23:45.227435 kernel: Bridge firewalling registered Jan 28 01:23:45.221526 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 28 01:23:45.223590 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 01:23:45.231159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:23:45.239590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:45.257906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:23:45.265762 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:23:45.280873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:23:45.304877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:23:45.311290 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:23:45.323013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:23:45.338229 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:23:45.343809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:23:45.367038 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 01:23:45.376889 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 28 01:23:45.385809 dracut-cmdline[252]: dracut-dracut-053 Jan 28 01:23:45.390865 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf Jan 28 01:23:45.414789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:23:45.434944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:23:45.456406 systemd-resolved[259]: Positive Trust Anchors: Jan 28 01:23:45.456424 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:23:45.456456 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:23:45.458603 systemd-resolved[259]: Defaulting to hostname 'linux'. Jan 28 01:23:45.459458 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:23:45.464725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:23:45.540638 kernel: SCSI subsystem initialized Jan 28 01:23:45.547643 kernel: Loading iSCSI transport class v2.0-870. Jan 28 01:23:45.556702 kernel: iscsi: registered transport (tcp) Jan 28 01:23:45.572572 kernel: iscsi: registered transport (qla4xxx) Jan 28 01:23:45.572603 kernel: QLogic iSCSI HBA Driver Jan 28 01:23:45.610564 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 01:23:45.622944 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 01:23:45.651829 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 01:23:45.651888 kernel: device-mapper: uevent: version 1.0.3 Jan 28 01:23:45.656709 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 28 01:23:45.704636 kernel: raid6: neonx8 gen() 15815 MB/s Jan 28 01:23:45.723625 kernel: raid6: neonx4 gen() 15689 MB/s Jan 28 01:23:45.742629 kernel: raid6: neonx2 gen() 13309 MB/s Jan 28 01:23:45.762630 kernel: raid6: neonx1 gen() 10548 MB/s Jan 28 01:23:45.781625 kernel: raid6: int64x8 gen() 6971 MB/s Jan 28 01:23:45.800641 kernel: raid6: int64x4 gen() 7362 MB/s Jan 28 01:23:45.820629 kernel: raid6: int64x2 gen() 6146 MB/s Jan 28 01:23:45.842035 kernel: raid6: int64x1 gen() 5072 MB/s Jan 28 01:23:45.842047 kernel: raid6: using algorithm neonx8 gen() 15815 MB/s Jan 28 01:23:45.864172 kernel: raid6: .... 
xor() 11956 MB/s, rmw enabled Jan 28 01:23:45.864183 kernel: raid6: using neon recovery algorithm Jan 28 01:23:45.873825 kernel: xor: measuring software checksum speed Jan 28 01:23:45.873879 kernel: 8regs : 19764 MB/sec Jan 28 01:23:45.876650 kernel: 32regs : 19552 MB/sec Jan 28 01:23:45.880233 kernel: arm64_neon : 27007 MB/sec Jan 28 01:23:45.883536 kernel: xor: using function: arm64_neon (27007 MB/sec) Jan 28 01:23:45.933638 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 01:23:45.942663 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:23:45.955735 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:23:45.974185 systemd-udevd[440]: Using default interface naming scheme 'v255'. Jan 28 01:23:45.978431 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:23:45.991847 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 01:23:46.011550 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Jan 28 01:23:46.038752 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:23:46.052745 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:23:46.088206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:23:46.103866 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 01:23:46.125965 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 01:23:46.135693 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:23:46.152595 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:23:46.169049 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:23:46.191068 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 01:23:46.207339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:23:46.207493 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:23:46.226837 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:23:46.242678 kernel: hv_vmbus: Vmbus version:5.3 Jan 28 01:23:46.242700 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 28 01:23:46.242711 kernel: hv_vmbus: registering driver hid_hyperv Jan 28 01:23:46.231880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:23:46.281812 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 28 01:23:46.281842 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 28 01:23:46.281853 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 28 01:23:46.281862 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 28 01:23:46.232183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:46.299732 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 28 01:23:46.268309 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:46.308915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
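The raid6 and xor benchmarks at the start of this stretch end with the kernel selecting neonx8 and arm64_neon; the choice is simply the highest measured throughput. A sketch over the logged figures:

    # Throughputs in MB/s as measured during this boot.
    raid6_gen = {"neonx8": 15815, "neonx4": 15689, "neonx2": 13309, "neonx1": 10548,
                 "int64x8": 6971, "int64x4": 7362, "int64x2": 6146, "int64x1": 5072}
    xor_funcs = {"8regs": 19764, "32regs": 19552, "arm64_neon": 27007}

    print(max(raid6_gen, key=raid6_gen.get))  # -> neonx8, as the log chose
    print(max(xor_funcs, key=xor_funcs.get))  # -> arm64_neon, as the log chose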
Jan 28 01:23:46.328029 kernel: PTP clock support registered Jan 28 01:23:46.328051 kernel: hv_vmbus: registering driver hv_netvsc Jan 28 01:23:46.328061 kernel: hv_vmbus: registering driver hv_storvsc Jan 28 01:23:46.323315 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:23:46.354724 kernel: scsi host1: storvsc_host_t Jan 28 01:23:46.354897 kernel: scsi host0: storvsc_host_t Jan 28 01:23:46.354997 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 28 01:23:46.333954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:46.358963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:23:46.377292 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 28 01:23:46.359114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:46.372765 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:46.393924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:46.411458 kernel: hv_utils: Registering HyperV Utility Driver Jan 28 01:23:46.411482 kernel: hv_vmbus: registering driver hv_utils Jan 28 01:23:46.418627 kernel: hv_utils: Heartbeat IC version 3.0 Jan 28 01:23:46.418675 kernel: hv_utils: Shutdown IC version 3.2 Jan 28 01:23:46.747904 kernel: hv_utils: TimeSync IC version 4.0 Jan 28 01:23:46.747942 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: VF slot 1 added Jan 28 01:23:46.747863 systemd-resolved[259]: Clock change detected. Flushing caches. Jan 28 01:23:46.764118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:46.785317 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 28 01:23:46.785497 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 01:23:46.785509 kernel: hv_vmbus: registering driver hv_pci Jan 28 01:23:46.785518 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 28 01:23:46.794548 kernel: hv_pci 20ca457c-bf4f-4d10-9e63-6f409154064f: PCI VMBus probing: Using version 0x10004 Jan 28 01:23:46.796490 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:23:46.820398 kernel: hv_pci 20ca457c-bf4f-4d10-9e63-6f409154064f: PCI host bridge to bus bf4f:00 Jan 28 01:23:46.820574 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#170 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:23:46.833883 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 28 01:23:46.834126 kernel: pci_bus bf4f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 28 01:23:46.834225 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 28 01:23:46.834311 kernel: pci_bus bf4f:00: No busn resource found for root bus, will use [bus 00-ff] Jan 28 01:23:46.843293 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 28 01:23:46.843529 kernel: pci bf4f:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 28 01:23:46.846759 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 28 01:23:46.853014 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 28 01:23:46.853204 kernel: pci bf4f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 28 01:23:46.862304 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 28 01:23:46.888028 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 01:23:46.888053 kernel: pci bf4f:00:02.0: enabling Extended Tags Jan 28 01:23:46.888081 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 28 01:23:46.905543 kernel: pci bf4f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bf4f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 28 01:23:46.905654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#99 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:23:46.916346 kernel: pci_bus bf4f:00: busn_res: [bus 00-ff] end is updated to 00 Jan 28 01:23:46.916631 kernel: pci bf4f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 28 01:23:46.962483 kernel: mlx5_core bf4f:00:02.0: enabling device (0000 -> 0002) Jan 28 01:23:46.962740 kernel: mlx5_core bf4f:00:02.0: firmware version: 16.30.5026 Jan 28 01:23:47.161497 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: VF registering: eth1 Jan 28 01:23:47.161689 kernel: mlx5_core bf4f:00:02.0 eth1: joined to eth0 Jan 28 01:23:47.166679 kernel: mlx5_core bf4f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 28 01:23:47.179500 kernel: mlx5_core bf4f:00:02.0 enP48975s1: renamed from eth1 Jan 28 01:23:47.458893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 28 01:23:47.517480 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (487) Jan 28 01:23:47.531559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 01:23:47.572490 kernel: BTRFS: device fsid 0fc26676-8036-4cd5-8c30-2943afb25b0b devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (485) Jan 28 01:23:47.585554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 28 01:23:47.591333 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 28 01:23:47.615578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 01:23:47.635079 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 28 01:23:47.650472 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 01:23:47.658473 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 01:23:47.667475 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 01:23:48.669528 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 01:23:48.670066 disk-uuid[610]: The operation has completed successfully. Jan 28 01:23:48.737554 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 01:23:48.741290 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 01:23:48.761566 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 01:23:48.770948 sh[723]: Success Jan 28 01:23:48.798497 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 28 01:23:49.047940 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:23:49.069579 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 01:23:49.077623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 28 01:23:49.103693 kernel: BTRFS info (device dm-0): first mount of filesystem 0fc26676-8036-4cd5-8c30-2943afb25b0b Jan 28 01:23:49.103749 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 28 01:23:49.109078 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 28 01:23:49.113032 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 01:23:49.116281 kernel: BTRFS info (device dm-0): using free space tree Jan 28 01:23:49.432363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 01:23:49.436131 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 01:23:49.453653 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 01:23:49.463091 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 01:23:49.491711 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:23:49.491762 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 01:23:49.495167 kernel: BTRFS info (device sda6): using free space tree Jan 28 01:23:49.549814 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:23:49.567652 kernel: BTRFS info (device sda6): auto enabling async discard Jan 28 01:23:49.570717 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:23:49.584236 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 28 01:23:49.593473 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:23:49.595052 systemd-networkd[899]: lo: Link UP Jan 28 01:23:49.595061 systemd-networkd[899]: lo: Gained carrier Jan 28 01:23:49.597157 systemd-networkd[899]: Enumeration completed Jan 28 01:23:49.597750 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:23:49.598026 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:23:49.598030 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:23:49.603625 systemd[1]: Reached target network.target - Network. Jan 28 01:23:49.618771 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 01:23:49.646732 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 01:23:49.697470 kernel: mlx5_core bf4f:00:02.0 enP48975s1: Link up Jan 28 01:23:49.733669 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: Data path switched to VF: enP48975s1 Jan 28 01:23:49.733362 systemd-networkd[899]: enP48975s1: Link UP Jan 28 01:23:49.733443 systemd-networkd[899]: eth0: Link UP Jan 28 01:23:49.733558 systemd-networkd[899]: eth0: Gained carrier Jan 28 01:23:49.733566 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 28 01:23:49.739640 systemd-networkd[899]: enP48975s1: Gained carrier Jan 28 01:23:49.760493 systemd-networkd[899]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 01:23:50.582730 ignition[908]: Ignition 2.19.0 Jan 28 01:23:50.582742 ignition[908]: Stage: fetch-offline Jan 28 01:23:50.582778 ignition[908]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:50.587624 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:23:50.582786 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:50.582882 ignition[908]: parsed url from cmdline: "" Jan 28 01:23:50.582885 ignition[908]: no config URL provided Jan 28 01:23:50.582890 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 01:23:50.582896 ignition[908]: no config at "/usr/lib/ignition/user.ign" Jan 28 01:23:50.582901 ignition[908]: failed to fetch config: resource requires networking Jan 28 01:23:50.618198 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 01:23:50.585866 ignition[908]: Ignition finished successfully Jan 28 01:23:50.631170 ignition[916]: Ignition 2.19.0 Jan 28 01:23:50.631177 ignition[916]: Stage: fetch Jan 28 01:23:50.631384 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:50.631397 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:50.633749 ignition[916]: parsed url from cmdline: "" Jan 28 01:23:50.633754 ignition[916]: no config URL provided Jan 28 01:23:50.633761 ignition[916]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 01:23:50.633774 ignition[916]: no config at "/usr/lib/ignition/user.ign" Jan 28 01:23:50.633799 ignition[916]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 28 01:23:50.713510 ignition[916]: GET result: OK Jan 28 01:23:50.713606 ignition[916]: config has been read from IMDS userdata Jan 28 01:23:50.713657 ignition[916]: parsing config with SHA512: aac34cbb961cd82ef616ddef8776492266795d7ad4819dd90080a8b5dad6edd5a2ca29fec6f2f37e73a350aa050c3528726067e8835a5b5726910e8dd5f2f077 Jan 28 01:23:50.717130 unknown[916]: fetched base config from "system" Jan 28 01:23:50.717515 ignition[916]: fetch: fetch complete Jan 28 01:23:50.717137 unknown[916]: fetched base config from "system" Jan 28 01:23:50.717520 ignition[916]: fetch: fetch passed Jan 28 01:23:50.717142 unknown[916]: fetched user config from "azure" Jan 28 01:23:50.717563 ignition[916]: Ignition finished successfully Jan 28 01:23:50.721294 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 01:23:50.742646 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 01:23:50.762556 ignition[922]: Ignition 2.19.0 Jan 28 01:23:50.762564 ignition[922]: Stage: kargs Jan 28 01:23:50.766695 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 01:23:50.762733 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:50.762742 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:50.763765 ignition[922]: kargs: kargs passed Jan 28 01:23:50.763811 ignition[922]: Ignition finished successfully Jan 28 01:23:50.790601 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 01:23:50.805524 ignition[928]: Ignition 2.19.0 Jan 28 01:23:50.805533 ignition[928]: Stage: disks Jan 28 01:23:50.809524 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
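In the fetch stage above, Ignition pulls its config from the Azure instance metadata service and logs the SHA-512 of the parsed result. A hedged sketch of the same retrieval: the URL is verbatim from the log, but the Metadata: true header (required by Azure IMDS) is an assumption, since the log does not show Ignition's request headers, and the logged digest is computed over the decoded config, which may differ from the raw response body:

    import hashlib
    import urllib.request

    # Endpoint exactly as logged by ignition[916] in the fetch stage.
    url = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})  # assumed header

    with urllib.request.urlopen(req) as resp:
        userdata = resp.read()

    # Ignition logs 'parsing config with SHA512: <hexdigest>'.
    print(hashlib.sha512(userdata).hexdigest())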
Jan 28 01:23:50.805693 ignition[928]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:50.815848 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 01:23:50.805701 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:50.824199 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:23:50.806572 ignition[928]: disks: disks passed Jan 28 01:23:50.832859 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:23:50.806629 ignition[928]: Ignition finished successfully Jan 28 01:23:50.841404 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:23:50.850349 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:23:50.868599 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 01:23:50.949916 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 28 01:23:50.960624 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 01:23:50.972625 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 01:23:51.026488 kernel: EXT4-fs (sda9): mounted filesystem 2c7419f5-3bc3-4c5f-b132-f03585db88cd r/w with ordered data mode. Quota mode: none. Jan 28 01:23:51.026723 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 01:23:51.033636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 01:23:51.075522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:23:51.105344 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Jan 28 01:23:51.105395 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:23:51.105412 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 01:23:51.108713 kernel: BTRFS info (device sda6): using free space tree Jan 28 01:23:51.108608 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 01:23:51.117059 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 28 01:23:51.127744 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 01:23:51.145552 kernel: BTRFS info (device sda6): auto enabling async discard Jan 28 01:23:51.127776 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:23:51.137380 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 01:23:51.150191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:23:51.167658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
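
The fsck and mount steps above address the root disk as /dev/disk/by-label/ROOT. Those by-label names are udev-maintained symlinks, so resolving one is just a readlink; a small sketch:

    import os

    def device_for_label(label):
        link = os.path.join("/dev/disk/by-label", label)
        return os.path.realpath(link) if os.path.lexists(link) else None

    print(device_for_label("ROOT"))   # resolves to /dev/sda9 on the machine in this log
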
Jan 28 01:23:51.637628 systemd-networkd[899]: eth0: Gained IPv6LL Jan 28 01:23:51.783827 coreos-metadata[963]: Jan 28 01:23:51.783 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 01:23:51.791213 coreos-metadata[963]: Jan 28 01:23:51.791 INFO Fetch successful Jan 28 01:23:51.795290 coreos-metadata[963]: Jan 28 01:23:51.795 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 28 01:23:51.813242 coreos-metadata[963]: Jan 28 01:23:51.813 INFO Fetch successful Jan 28 01:23:51.845936 coreos-metadata[963]: Jan 28 01:23:51.845 INFO wrote hostname ci-4081.3.6-n-20d4350ff0 to /sysroot/etc/hostname Jan 28 01:23:51.854495 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 01:23:52.137368 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 01:23:52.174906 initrd-setup-root[984]: cut: /sysroot/etc/group: No such file or directory Jan 28 01:23:52.196013 initrd-setup-root[991]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 01:23:52.215200 initrd-setup-root[998]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 01:23:53.552507 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 01:23:53.563678 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 01:23:53.569866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 01:23:53.588652 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:23:53.589003 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 01:23:53.612389 ignition[1066]: INFO : Ignition 2.19.0 Jan 28 01:23:53.616099 ignition[1066]: INFO : Stage: mount Jan 28 01:23:53.616099 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:53.616099 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:53.616099 ignition[1066]: INFO : mount: mount passed Jan 28 01:23:53.616099 ignition[1066]: INFO : Ignition finished successfully Jan 28 01:23:53.619005 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:23:53.627049 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:23:53.646658 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:23:53.661727 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:23:53.680480 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077) Jan 28 01:23:53.690626 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:23:53.690661 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 01:23:53.693987 kernel: BTRFS info (device sda6): using free space tree Jan 28 01:23:53.701494 kernel: BTRFS info (device sda6): auto enabling async discard Jan 28 01:23:53.702310 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
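
The flatcar-metadata-hostname step above fetches the instance name from IMDS (endpoint copied verbatim from the log) and persists it where the real root will find it. A sketch of just that flow, same request shape as the earlier IMDS example; writing under /sysroot assumes the initrd context:

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/name"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        name = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:   # path as reported in the log
        f.write(name + "\n")
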
Jan 28 01:23:53.726834 ignition[1094]: INFO : Ignition 2.19.0 Jan 28 01:23:53.726834 ignition[1094]: INFO : Stage: files Jan 28 01:23:53.733862 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:53.733862 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:53.733862 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:23:53.733862 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:23:53.733862 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:23:54.009178 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:23:54.015023 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:23:54.015023 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:23:54.012036 unknown[1094]: wrote ssh authorized keys file for user: core Jan 28 01:23:54.057554 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 01:23:54.065819 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 28 01:23:54.110146 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:23:54.279519 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 01:23:54.317315 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 28 01:23:54.577697 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 01:23:54.950229 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 01:23:54.950229 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 01:23:54.983248 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:23:54.992722 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:23:54.992722 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 01:23:54.992722 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:23:54.992722 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:23:54.992722 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:23:54.992722 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:23:54.992722 ignition[1094]: INFO : files: files passed Jan 28 01:23:54.992722 ignition[1094]: INFO : Ignition finished successfully Jan 28 01:23:54.993409 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:23:55.017196 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 01:23:55.041608 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:23:55.047887 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:23:55.047978 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 01:23:55.084098 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:23:55.084098 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:23:55.097611 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:23:55.092692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:23:55.103594 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:23:55.125687 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:23:55.152021 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:23:55.156189 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
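
Every file, link, and unit operation in the files stage above is driven by a declarative Ignition config. A minimal sketch of the kind of config that would produce a few of the logged ops, using Ignition v3 schema field names; the spec version string and the trimmed-down contents are assumptions, not the config this VM actually received:

    import json

    config = {
        "ignition": {"version": "3.4.0"},            # assumed spec version
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                "contents": {"source":
                    "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
            }],
        },
        "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
    }
    print(json.dumps(config, indent=2))
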
Jan 28 01:23:55.162236 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:23:55.171473 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:23:55.180426 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:23:55.191693 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:23:55.207598 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:23:55.220941 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:23:55.235810 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:23:55.241401 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:23:55.251026 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:23:55.259583 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:23:55.259745 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:23:55.271963 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:23:55.280688 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:23:55.288334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:23:55.296115 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:23:55.305793 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:23:55.315854 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:23:55.324348 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:23:55.333505 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:23:55.342761 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:23:55.350880 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:23:55.358218 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:23:55.358381 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:23:55.369660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:23:55.378237 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:23:55.387444 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:23:55.387554 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:23:55.397546 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:23:55.397707 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:23:55.410943 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:23:55.411099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:23:55.420116 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:23:55.420265 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:23:55.428574 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 28 01:23:55.428714 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 01:23:55.453546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 28 01:23:55.475606 ignition[1146]: INFO : Ignition 2.19.0 Jan 28 01:23:55.475606 ignition[1146]: INFO : Stage: umount Jan 28 01:23:55.502317 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:55.502317 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:55.502317 ignition[1146]: INFO : umount: umount passed Jan 28 01:23:55.502317 ignition[1146]: INFO : Ignition finished successfully Jan 28 01:23:55.476815 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:23:55.482714 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:23:55.482924 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:23:55.488373 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:23:55.488588 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:23:55.502635 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:23:55.502733 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:23:55.517781 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:23:55.521279 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:23:55.521386 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:23:55.530793 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:23:55.530844 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:23:55.541155 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:23:55.541204 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:23:55.548784 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 01:23:55.548821 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 01:23:55.557577 systemd[1]: Stopped target network.target - Network. Jan 28 01:23:55.565643 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:23:55.565700 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:23:55.574801 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:23:55.582722 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:23:55.593829 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:23:55.599106 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:23:55.607950 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:23:55.611995 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:23:55.612051 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:23:55.621398 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:23:55.621441 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:23:55.629456 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:23:55.629511 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:23:55.637564 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:23:55.637599 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:23:55.645981 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:23:55.658058 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 28 01:23:55.666497 systemd-networkd[899]: eth0: DHCPv6 lease lost Jan 28 01:23:55.670500 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:23:55.670662 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:23:55.685383 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:23:55.685584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:23:55.694798 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:23:55.694848 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:23:55.717646 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:23:55.852304 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: Data path switched from VF: enP48975s1 Jan 28 01:23:55.725052 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:23:55.725118 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:23:55.733973 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:23:55.734012 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:23:55.743264 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:23:55.743308 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:23:55.752163 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:23:55.752202 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:23:55.762161 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:23:55.794053 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:23:55.794255 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:23:55.803430 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:23:55.803543 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:23:55.812182 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:23:55.812217 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:23:55.820440 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:23:55.820539 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:23:55.832736 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:23:55.832782 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:23:55.856482 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:23:55.856540 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:23:55.880695 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:23:55.890750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:23:55.890822 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:23:55.905935 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 01:23:55.905993 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:23:55.916970 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 28 01:23:55.917017 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:23:55.926148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:23:55.926190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:55.936771 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:23:55.936863 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:23:55.946431 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:23:55.946931 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:23:56.106035 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:23:56.106152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:23:56.110709 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:23:56.119295 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:23:56.119357 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:23:56.141721 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:23:56.168145 systemd[1]: Switching root. Jan 28 01:23:56.235045 systemd-journald[217]: Journal stopped Jan 28 01:24:01.601373 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 28 01:24:01.601398 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:24:01.601408 kernel: SELinux: policy capability open_perms=1 Jan 28 01:24:01.601418 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:24:01.601426 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:24:01.601434 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:24:01.601443 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:24:01.601451 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:24:01.601468 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:24:01.601477 kernel: audit: type=1403 audit(1769563437.561:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 01:24:01.601488 systemd[1]: Successfully loaded SELinux policy in 197.017ms. Jan 28 01:24:01.601497 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.735ms. Jan 28 01:24:01.601507 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 01:24:01.601516 systemd[1]: Detected virtualization microsoft. Jan 28 01:24:01.601526 systemd[1]: Detected architecture arm64. Jan 28 01:24:01.601536 systemd[1]: Detected first boot. Jan 28 01:24:01.601546 systemd[1]: Hostname set to <ci-4081.3.6-n-20d4350ff0>. Jan 28 01:24:01.601555 systemd[1]: Initializing machine ID from random generator. Jan 28 01:24:01.601565 zram_generator::config[1189]: No configuration found. Jan 28 01:24:01.601574 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:24:01.601583 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 01:24:01.601594 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 01:24:01.601603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
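
"Initializing machine ID from random generator" above refers to machine-id(5): a 128-bit random ID stored as 32 lowercase hex digits in /etc/machine-id. A one-line sketch of producing a value of the same shape (systemd uses its own random source; uuid4 here is just a stand-in):

    import uuid

    machine_id = uuid.uuid4().hex     # 32 lowercase hex digits, 128 bits
    assert len(machine_id) == 32
    print(machine_id)                 # the shape of what lands in /etc/machine-id
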
Jan 28 01:24:01.601613 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:24:01.601622 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:24:01.601632 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:24:01.601641 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:24:01.601651 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:24:01.601662 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:24:01.601671 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:24:01.601681 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:24:01.601690 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:24:01.601699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:24:01.601709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:24:01.601719 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:24:01.601728 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:24:01.601738 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:24:01.601750 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 28 01:24:01.601760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:24:01.601770 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 01:24:01.601782 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 01:24:01.601791 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 01:24:01.601801 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:24:01.601810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:24:01.601821 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:24:01.601831 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:24:01.601840 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:24:01.601850 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:24:01.601859 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:24:01.601869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:24:01.601878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:24:01.601890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:24:01.601899 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:24:01.601909 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:24:01.601919 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:24:01.601928 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:24:01.601938 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 28 01:24:01.601948 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:24:01.601959 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:24:01.601969 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:24:01.601979 systemd[1]: Reached target machines.target - Containers. Jan 28 01:24:01.601992 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:24:01.602001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:24:01.602012 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:24:01.602021 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:24:01.602033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:24:01.602043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:24:01.602053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:24:01.602062 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:24:01.602072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:24:01.602082 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:24:01.602092 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 01:24:01.602101 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 01:24:01.602111 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 01:24:01.602122 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 01:24:01.602131 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:24:01.602140 kernel: fuse: init (API version 7.39) Jan 28 01:24:01.602149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:24:01.602158 kernel: loop: module loaded Jan 28 01:24:01.602169 kernel: ACPI: bus type drm_connector registered Jan 28 01:24:01.602178 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:24:01.602188 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:24:01.602213 systemd-journald[1278]: Collecting audit messages is disabled. Jan 28 01:24:01.602235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:24:01.602245 systemd-journald[1278]: Journal started Jan 28 01:24:01.602266 systemd-journald[1278]: Runtime Journal (/run/log/journal/377f9c4d190743f6b0f8cfe3e55ce039) is 8.0M, max 78.5M, 70.5M free. Jan 28 01:24:00.666258 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:24:00.802641 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 28 01:24:00.802958 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 01:24:00.803248 systemd[1]: systemd-journald.service: Consumed 2.408s CPU time. Jan 28 01:24:01.619599 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 01:24:01.619670 systemd[1]: Stopped verity-setup.service. 
Jan 28 01:24:01.632213 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:24:01.636129 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:24:01.640717 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:24:01.645387 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:24:01.649488 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:24:01.654274 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:24:01.659168 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 01:24:01.663626 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:24:01.668920 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:24:01.674530 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:24:01.674656 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:24:01.679964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:24:01.680089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:24:01.685210 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:24:01.685331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:24:01.690355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:24:01.690485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:24:01.696773 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:24:01.696888 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:24:01.701742 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:24:01.701868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:24:01.707388 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:24:01.713007 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:24:01.718669 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:24:01.724491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:24:01.737708 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:24:01.748525 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:24:01.754364 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:24:01.759070 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:24:01.759101 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:24:01.764353 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 01:24:01.770720 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:24:01.776440 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:24:01.780752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:24:01.782139 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 28 01:24:01.788730 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:24:01.793899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:24:01.795127 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:24:01.800220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:24:01.802673 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:24:01.810677 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 01:24:01.821178 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:24:01.830533 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 01:24:01.838955 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:24:01.844095 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:24:01.849719 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:24:01.855397 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:24:01.864103 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:24:01.873687 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 01:24:01.878345 systemd-journald[1278]: Time spent on flushing to /var/log/journal/377f9c4d190743f6b0f8cfe3e55ce039 is 12.759ms for 903 entries. Jan 28 01:24:01.878345 systemd-journald[1278]: System Journal (/var/log/journal/377f9c4d190743f6b0f8cfe3e55ce039) is 8.0M, max 2.6G, 2.6G free. Jan 28 01:24:01.908043 systemd-journald[1278]: Received client request to flush runtime journal. Jan 28 01:24:01.908077 kernel: loop0: detected capacity change from 0 to 114328 Jan 28 01:24:01.885838 udevadm[1326]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 28 01:24:01.909865 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:24:01.949563 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:24:01.950385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:24:01.955641 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 01:24:02.001601 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Jan 28 01:24:02.001616 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Jan 28 01:24:02.006012 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:24:02.020598 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:24:02.130652 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:24:02.142864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:24:02.158729 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jan 28 01:24:02.158743 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. 
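
As a quick sanity check on the journal flush line above, 12.759 ms for 903 entries works out to roughly 14 microseconds per entry:

    # Per-entry cost of the flush reported by systemd-journald above.
    print(f"{12.759e-3 / 903 * 1e6:.1f} us/entry")   # -> 14.1 us/entry
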
Jan 28 01:24:02.162567 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:24:02.464480 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:24:02.502493 kernel: loop1: detected capacity change from 0 to 114432 Jan 28 01:24:02.586262 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:24:02.595650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:24:02.620136 systemd-udevd[1350]: Using default interface naming scheme 'v255'. Jan 28 01:24:02.786378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:24:02.803249 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:24:02.839598 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 28 01:24:02.868986 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:24:02.955496 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#183 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:24:02.965505 kernel: hv_vmbus: registering driver hv_balloon Jan 28 01:24:02.965535 kernel: loop2: detected capacity change from 0 to 31320 Jan 28 01:24:02.964202 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 01:24:02.988061 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 28 01:24:02.988140 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 28 01:24:03.004739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:24:03.020238 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:24:03.020306 kernel: hv_vmbus: registering driver hyperv_fb Jan 28 01:24:03.029558 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 28 01:24:03.035359 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 28 01:24:03.040329 kernel: Console: switching to colour dummy device 80x25 Jan 28 01:24:03.037269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:24:03.037588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:24:03.046497 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 01:24:03.059598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:24:03.072239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:24:03.073602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:24:03.086649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:24:03.099490 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1372) Jan 28 01:24:03.128311 systemd-networkd[1361]: lo: Link UP Jan 28 01:24:03.128622 systemd-networkd[1361]: lo: Gained carrier Jan 28 01:24:03.131503 systemd-networkd[1361]: Enumeration completed Jan 28 01:24:03.131688 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:24:03.132043 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:24:03.132114 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 28 01:24:03.152281 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:24:03.159204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 01:24:03.165185 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:24:03.193478 kernel: mlx5_core bf4f:00:02.0 enP48975s1: Link up Jan 28 01:24:03.218885 kernel: hv_netvsc 7ced8d7a-2afc-7ced-8d7a-2afc7ced8d7a eth0: Data path switched to VF: enP48975s1 Jan 28 01:24:03.219633 systemd-networkd[1361]: enP48975s1: Link UP Jan 28 01:24:03.219717 systemd-networkd[1361]: eth0: Link UP Jan 28 01:24:03.219721 systemd-networkd[1361]: eth0: Gained carrier Jan 28 01:24:03.219734 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:24:03.224289 systemd-networkd[1361]: enP48975s1: Gained carrier Jan 28 01:24:03.226358 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:24:03.235534 systemd-networkd[1361]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 01:24:03.418397 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 01:24:03.427588 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 01:24:03.459793 kernel: loop3: detected capacity change from 0 to 207008 Jan 28 01:24:03.483781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:24:03.496495 kernel: loop4: detected capacity change from 0 to 114328 Jan 28 01:24:03.511480 kernel: loop5: detected capacity change from 0 to 114432 Jan 28 01:24:03.516475 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:24:03.533707 kernel: loop6: detected capacity change from 0 to 31320 Jan 28 01:24:03.533790 kernel: I/O error, dev loop6, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 2 Jan 28 01:24:03.538505 kernel: loop7: detected capacity change from 0 to 207008 Jan 28 01:24:03.548370 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 01:24:03.549807 (sd-merge)[1454]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 28 01:24:03.550238 (sd-merge)[1454]: Merged extensions into '/usr'. Jan 28 01:24:03.558093 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:24:03.572676 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 01:24:03.577452 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:24:03.578173 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:24:03.578253 systemd[1]: Reloading... Jan 28 01:24:03.639507 zram_generator::config[1488]: No configuration found. Jan 28 01:24:03.759088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:24:03.831637 systemd[1]: Reloading finished in 252 ms. Jan 28 01:24:03.861840 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
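
The Link UP and Gained carrier transitions that systemd-networkd logs above are also visible directly in sysfs. A small sketch that reads them for one interface; note the kernel returns EINVAL when carrier is read on an administratively down link, which the except branch absorbs:

    def link_state(iface):
        base = f"/sys/class/net/{iface}"
        with open(f"{base}/operstate") as f:
            oper = f.read().strip()
        try:
            with open(f"{base}/carrier") as f:
                carrier = f.read().strip() == "1"
        except OSError:               # EINVAL while the link is down
            carrier = False
        return oper, carrier

    print(link_state("eth0"))         # e.g. ('up', True) once DHCP has completed
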
Jan 28 01:24:03.869092 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:24:03.882610 systemd[1]: Starting ensure-sysext.service... Jan 28 01:24:03.887234 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:24:03.895642 systemd[1]: Reloading requested from client PID 1541 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:24:03.895656 systemd[1]: Reloading... Jan 28 01:24:03.921087 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:24:03.922037 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 01:24:03.922743 systemd-tmpfiles[1542]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 01:24:03.922959 systemd-tmpfiles[1542]: ACLs are not supported, ignoring. Jan 28 01:24:03.923007 systemd-tmpfiles[1542]: ACLs are not supported, ignoring. Jan 28 01:24:03.942347 systemd-tmpfiles[1542]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:24:03.942503 systemd-tmpfiles[1542]: Skipping /boot Jan 28 01:24:03.953216 systemd-tmpfiles[1542]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:24:03.954150 systemd-tmpfiles[1542]: Skipping /boot Jan 28 01:24:03.974603 zram_generator::config[1572]: No configuration found. Jan 28 01:24:04.078829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:24:04.157002 systemd[1]: Reloading finished in 261 ms. Jan 28 01:24:04.175342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:24:04.192640 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:24:04.200719 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:24:04.208883 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:24:04.216723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:24:04.223635 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:24:04.234274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:24:04.241425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:24:04.253561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:24:04.270766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:24:04.276592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:24:04.277762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:24:04.279617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:24:04.289702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:24:04.289852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:24:04.302227 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 28 01:24:04.303410 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:24:04.313973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:24:04.320650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:24:04.327964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:24:04.341288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:24:04.346953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:24:04.347888 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:24:04.355280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:24:04.362351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:24:04.362508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:24:04.368154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:24:04.368318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:24:04.368752 systemd-resolved[1634]: Positive Trust Anchors: Jan 28 01:24:04.368762 systemd-resolved[1634]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:24:04.368797 systemd-resolved[1634]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:24:04.374918 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:24:04.375040 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:24:04.386978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:24:04.394551 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:24:04.401734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:24:04.411710 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:24:04.411916 augenrules[1665]: No rules Jan 28 01:24:04.427746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:24:04.434382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:24:04.434754 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:24:04.443160 systemd-resolved[1634]: Using system hostname 'ci-4081.3.6-n-20d4350ff0'. Jan 28 01:24:04.445483 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:24:04.454149 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:24:04.460389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:24:04.460565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 28 01:24:04.466347 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:24:04.466536 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:24:04.472387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:24:04.472528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:24:04.480853 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:24:04.481006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:24:04.488670 systemd[1]: Finished ensure-sysext.service. Jan 28 01:24:04.499143 systemd[1]: Reached target network.target - Network. Jan 28 01:24:04.503795 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:24:04.509635 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:24:04.509717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:24:04.977527 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:24:04.983617 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:24:05.141716 systemd-networkd[1361]: eth0: Gained IPv6LL Jan 28 01:24:05.144517 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:24:05.150785 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:24:08.135454 ldconfig[1318]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:24:08.147901 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:24:08.157668 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:24:08.170185 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:24:08.175386 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:24:08.180026 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:24:08.185361 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:24:08.190924 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:24:08.195521 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:24:08.201126 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:24:08.206834 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:24:08.206866 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:24:08.210815 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:24:08.216044 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:24:08.222092 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:24:08.230016 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:24:08.234979 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
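
The ldconfig complaint a few entries up is literally about magic bytes: ELF objects start with 0x7f followed by 'E', 'L', 'F', and ld.so.conf is a text file. A sketch of that check; the probe paths are just examples:

    def is_elf(path):
        with open(path, "rb") as f:
            return f.read(4) == b"\x7fELF"

    print(is_elf("/bin/ls"))          # True for an ELF binary
    print(is_elf("/etc/os-release"))  # False: plain text, like ld.so.conf above
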
Jan 28 01:24:08.239591 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:24:08.243604 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:24:08.247882 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:24:08.247907 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:24:08.250229 systemd[1]: Starting chronyd.service - NTP client/server... Jan 28 01:24:08.256601 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:24:08.269594 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 01:24:08.283018 (chronyd)[1686]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 28 01:24:08.288708 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:24:08.293808 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:24:08.299259 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:24:08.303818 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:24:08.303855 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 28 01:24:08.305674 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 28 01:24:08.311917 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 28 01:24:08.313365 chronyd[1697]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 28 01:24:08.319645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:08.321874 KVP[1694]: KVP starting; pid is:1694 Jan 28 01:24:08.324414 jq[1692]: false Jan 28 01:24:08.326540 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:24:08.332649 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:24:08.343344 KVP[1694]: KVP LIC Version: 3.1 Jan 28 01:24:08.343701 kernel: hv_utils: KVP IC version 4.0 Jan 28 01:24:08.346616 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:24:08.352747 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:24:08.360419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:24:08.367666 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:24:08.368840 chronyd[1697]: Timezone right/UTC failed leap second check, ignoring Jan 28 01:24:08.375543 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:24:08.376014 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:24:08.377275 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:24:08.381043 chronyd[1697]: Loaded seccomp filter (level 2) Jan 28 01:24:08.392579 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 28 01:24:08.396584 extend-filesystems[1693]: Found loop4 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found loop5 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found loop6 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found loop7 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found sda Jan 28 01:24:08.396584 extend-filesystems[1693]: Found sda1 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found sda2 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found sda3 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found usr Jan 28 01:24:08.396584 extend-filesystems[1693]: Found sda4 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found sda6 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found sda7 Jan 28 01:24:08.396584 extend-filesystems[1693]: Found sda9 Jan 28 01:24:08.396584 extend-filesystems[1693]: Checking size of /dev/sda9 Jan 28 01:24:08.401790 systemd[1]: Started chronyd.service - NTP client/server. Jan 28 01:24:08.648665 coreos-metadata[1688]: Jan 28 01:24:08.544 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 01:24:08.648665 coreos-metadata[1688]: Jan 28 01:24:08.552 INFO Fetch successful Jan 28 01:24:08.648665 coreos-metadata[1688]: Jan 28 01:24:08.552 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 28 01:24:08.648665 coreos-metadata[1688]: Jan 28 01:24:08.558 INFO Fetch successful Jan 28 01:24:08.648665 coreos-metadata[1688]: Jan 28 01:24:08.558 INFO Fetching http://168.63.129.16/machine/7c5cea59-60d2-4145-972f-82e1a4d0936c/f22921c5%2D3f66%2D44ef%2D9a07%2D8eb860b837a7.%5Fci%2D4081.3.6%2Dn%2D20d4350ff0?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 28 01:24:08.648665 coreos-metadata[1688]: Jan 28 01:24:08.560 INFO Fetch successful Jan 28 01:24:08.648665 coreos-metadata[1688]: Jan 28 01:24:08.560 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 28 01:24:08.648665 coreos-metadata[1688]: Jan 28 01:24:08.573 INFO Fetch successful Jan 28 01:24:08.648873 extend-filesystems[1693]: Old size kept for /dev/sda9 Jan 28 01:24:08.648873 extend-filesystems[1693]: Found sr0 Jan 28 01:24:08.437835 dbus-daemon[1691]: [system] SELinux support is enabled Jan 28 01:24:08.414707 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:24:08.697250 update_engine[1708]: I20260128 01:24:08.500720 1708 main.cc:92] Flatcar Update Engine starting Jan 28 01:24:08.697250 update_engine[1708]: I20260128 01:24:08.502014 1708 update_check_scheduler.cc:74] Next update check in 4m44s Jan 28 01:24:08.417503 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:24:08.424936 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:24:08.699587 jq[1712]: true Jan 28 01:24:08.425131 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:24:08.435890 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:24:08.436074 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:24:08.704277 tar[1721]: linux-arm64/LICENSE Jan 28 01:24:08.704277 tar[1721]: linux-arm64/helm Jan 28 01:24:08.449146 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
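The coreos-metadata entries above show the two Azure metadata channels: the WireServer at 168.63.129.16 (plain HTTP, used for goal state and provisioning) and IMDS at 169.254.169.254, which rejects any request lacking a "Metadata: true" header. A rough stdlib-only sketch of the vmSize fetch in the log (URL and api-version copied from the entry above; this only works from inside an Azure VM, and error handling is omitted):

# Sketch: fetch the VM size from Azure IMDS, as coreos-metadata does above.
# IMDS requires the "Metadata: true" header and is only reachable in-VM.
import urllib.request

url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # the instance's size string; value varies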
Jan 28 01:24:08.480112 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:24:08.706351 jq[1725]: true Jan 28 01:24:08.480155 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:24:08.485830 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 01:24:08.517918 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:24:08.517966 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:24:08.533292 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:24:08.533529 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:24:08.543677 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:24:08.568195 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:24:08.569893 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:24:08.571072 systemd-logind[1706]: New seat seat0. Jan 28 01:24:08.581913 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:24:08.602698 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:24:08.659854 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 01:24:08.681861 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:24:08.764477 bash[1774]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:24:08.757816 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:24:08.769898 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:24:08.797494 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1763) Jan 28 01:24:08.896183 locksmithd[1755]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:24:09.309740 containerd[1726]: time="2026-01-28T01:24:09.309605220Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 01:24:09.354672 containerd[1726]: time="2026-01-28T01:24:09.354625820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.358784260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.358820700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.358838140Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.359002020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.359018100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.359083860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.359096220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.359247780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.359261340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.359274220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:24:09.359924 containerd[1726]: time="2026-01-28T01:24:09.359283660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 01:24:09.360238 containerd[1726]: time="2026-01-28T01:24:09.359345060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:24:09.360238 containerd[1726]: time="2026-01-28T01:24:09.359546140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:24:09.360238 containerd[1726]: time="2026-01-28T01:24:09.359640580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:24:09.360238 containerd[1726]: time="2026-01-28T01:24:09.359653900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 01:24:09.360238 containerd[1726]: time="2026-01-28T01:24:09.359729620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 01:24:09.360238 containerd[1726]: time="2026-01-28T01:24:09.359765500Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.388831700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.388895540Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.388911420Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.388936180Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.388953980Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389117460Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389340860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389437300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389452740Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389483140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389497060Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389509740Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389522700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 01:24:09.389990 containerd[1726]: time="2026-01-28T01:24:09.389537620Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389552500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389565140Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389578020Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389590580Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389610500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389624540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389636940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389650580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389663020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389675540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389687100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389699420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389711540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390350 containerd[1726]: time="2026-01-28T01:24:09.389724900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390598 containerd[1726]: time="2026-01-28T01:24:09.389735460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390598 containerd[1726]: time="2026-01-28T01:24:09.389746660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390598 containerd[1726]: time="2026-01-28T01:24:09.389758580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390598 containerd[1726]: time="2026-01-28T01:24:09.389773660Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 01:24:09.390598 containerd[1726]: time="2026-01-28T01:24:09.389816220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390598 containerd[1726]: time="2026-01-28T01:24:09.389828900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.390598 containerd[1726]: time="2026-01-28T01:24:09.389840060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 01:24:09.395467 containerd[1726]: time="2026-01-28T01:24:09.393302300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 01:24:09.395467 containerd[1726]: time="2026-01-28T01:24:09.393344300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 01:24:09.395467 containerd[1726]: time="2026-01-28T01:24:09.393356900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 01:24:09.395467 containerd[1726]: time="2026-01-28T01:24:09.393371100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 01:24:09.395467 containerd[1726]: time="2026-01-28T01:24:09.393380540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.395467 containerd[1726]: time="2026-01-28T01:24:09.393392820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 28 01:24:09.395467 containerd[1726]: time="2026-01-28T01:24:09.393402460Z" level=info msg="NRI interface is disabled by configuration." Jan 28 01:24:09.395467 containerd[1726]: time="2026-01-28T01:24:09.393413420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 28 01:24:09.397296 containerd[1726]: time="2026-01-28T01:24:09.396025500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 01:24:09.397296 containerd[1726]: time="2026-01-28T01:24:09.396337860Z" level=info msg="Connect containerd service" Jan 28 01:24:09.397296 containerd[1726]: time="2026-01-28T01:24:09.396387940Z" level=info msg="using legacy CRI server" Jan 28 01:24:09.397296 containerd[1726]: time="2026-01-28T01:24:09.396396860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:24:09.397505 containerd[1726]: time="2026-01-28T01:24:09.397368420Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 01:24:09.398078 
containerd[1726]: time="2026-01-28T01:24:09.398050540Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:24:09.398704 containerd[1726]: time="2026-01-28T01:24:09.398666900Z" level=info msg="Start subscribing containerd event" Jan 28 01:24:09.398849 containerd[1726]: time="2026-01-28T01:24:09.398834020Z" level=info msg="Start recovering state" Jan 28 01:24:09.398970 containerd[1726]: time="2026-01-28T01:24:09.398956620Z" level=info msg="Start event monitor" Jan 28 01:24:09.399024 containerd[1726]: time="2026-01-28T01:24:09.399012980Z" level=info msg="Start snapshots syncer" Jan 28 01:24:09.399336 containerd[1726]: time="2026-01-28T01:24:09.399314220Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:24:09.399396 containerd[1726]: time="2026-01-28T01:24:09.399384780Z" level=info msg="Start streaming server" Jan 28 01:24:09.399703 containerd[1726]: time="2026-01-28T01:24:09.399683980Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:24:09.400268 containerd[1726]: time="2026-01-28T01:24:09.400244420Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:24:09.400435 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:24:09.408692 containerd[1726]: time="2026-01-28T01:24:09.408470260Z" level=info msg="containerd successfully booted in 0.101547s" Jan 28 01:24:09.433948 sshd_keygen[1719]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:24:09.466645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:09.472143 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:24:09.475579 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:24:09.491935 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:24:09.492220 tar[1721]: linux-arm64/README.md Jan 28 01:24:09.500594 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 28 01:24:09.516752 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:24:09.518526 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:24:09.526011 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:24:09.541129 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:24:09.548865 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 28 01:24:09.560685 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:24:09.571760 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:24:09.584153 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 28 01:24:09.589501 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:24:09.593704 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:24:09.598833 systemd[1]: Startup finished in 602ms (kernel) + 12.325s (initrd) + 12.232s (userspace) = 25.160s. 
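The long "Start cri plugin with config {...}" dump above is the effective CRI configuration, and the detail that matters most for Kubernetes is Options:map[SystemdCgroup:true] on the runc runtime: containerd drives cgroups through systemd, and the kubelet must be configured with the matching cgroup driver. A sketch of where that knob lives in a containerd config.toml, assuming Python >= 3.11 for stdlib tomllib; the fragment is a hand-written stand-in, not the host's real file:

# Sketch: a minimal, hypothetical config.toml fragment mirroring the
# SystemdCgroup=true setting visible in the CRI config dump above.
import tomllib

fragment = """
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
"""
cri = tomllib.loads(fragment)["plugins"]["io.containerd.grpc.v1.cri"]
runc = cri["containerd"]["runtimes"]["runc"]
assert runc["options"]["SystemdCgroup"] is True
print(runc)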
Jan 28 01:24:09.931551 kubelet[1831]: E0128 01:24:09.931449 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:24:09.934193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:24:09.934332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:24:10.020725 login[1850]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 28 01:24:10.022678 login[1851]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:10.030139 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:24:10.039683 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:24:10.041841 systemd-logind[1706]: New session 2 of user core. Jan 28 01:24:10.063589 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:24:10.069690 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:24:10.073279 (systemd)[1864]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 01:24:10.215173 systemd[1864]: Queued start job for default target default.target. Jan 28 01:24:10.222283 systemd[1864]: Created slice app.slice - User Application Slice. Jan 28 01:24:10.222309 systemd[1864]: Reached target paths.target - Paths. Jan 28 01:24:10.222321 systemd[1864]: Reached target timers.target - Timers. Jan 28 01:24:10.223477 systemd[1864]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:24:10.234321 systemd[1864]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:24:10.234424 systemd[1864]: Reached target sockets.target - Sockets. Jan 28 01:24:10.234437 systemd[1864]: Reached target basic.target - Basic System. Jan 28 01:24:10.234498 systemd[1864]: Reached target default.target - Main User Target. Jan 28 01:24:10.234525 systemd[1864]: Startup finished in 155ms. Jan 28 01:24:10.234758 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:24:10.236737 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 01:24:11.021066 login[1850]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:11.024796 systemd-logind[1706]: New session 1 of user core. Jan 28 01:24:11.036871 systemd[1]: Started session-1.scope - Session 1 of User core. 
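The kubelet failure above is the expected chicken-and-egg on a node that has not yet been initialized: the unit starts kubelet against /var/lib/kubelet/config.yaml, but that file is only written by kubeadm init/join, so kubelet exits 1 and systemd keeps scheduling restarts (the "restart counter is at N" lines that follow). A sketch of the same precondition, with the path taken from the error message:

# Sketch: the check kubelet fails above -- exit non-zero while the
# kubeadm-generated config is absent; systemd's Restart= logic retries.
import os, sys

CONFIG = "/var/lib/kubelet/config.yaml"  # path from the log message above
if not os.path.exists(CONFIG):
    print(f"failed to load Kubelet config file {CONFIG}", file=sys.stderr)
    sys.exit(1)
print("config present; kubelet would proceed")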
Jan 28 01:24:11.326477 waagent[1847]: 2026-01-28T01:24:11.325219Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 28 01:24:11.329998 waagent[1847]: 2026-01-28T01:24:11.329946Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 28 01:24:11.333725 waagent[1847]: 2026-01-28T01:24:11.333688Z INFO Daemon Daemon Python: 3.11.9 Jan 28 01:24:11.337247 waagent[1847]: 2026-01-28T01:24:11.337203Z INFO Daemon Daemon Run daemon Jan 28 01:24:11.340734 waagent[1847]: 2026-01-28T01:24:11.340692Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 28 01:24:11.347959 waagent[1847]: 2026-01-28T01:24:11.347923Z INFO Daemon Daemon Using waagent for provisioning Jan 28 01:24:11.352312 waagent[1847]: 2026-01-28T01:24:11.352279Z INFO Daemon Daemon Activate resource disk Jan 28 01:24:11.356316 waagent[1847]: 2026-01-28T01:24:11.356282Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 28 01:24:11.365887 waagent[1847]: 2026-01-28T01:24:11.365845Z INFO Daemon Daemon Found device: None Jan 28 01:24:11.369807 waagent[1847]: 2026-01-28T01:24:11.369773Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 28 01:24:11.376749 waagent[1847]: 2026-01-28T01:24:11.376713Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 28 01:24:11.387776 waagent[1847]: 2026-01-28T01:24:11.387730Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 01:24:11.392464 waagent[1847]: 2026-01-28T01:24:11.392426Z INFO Daemon Daemon Running default provisioning handler Jan 28 01:24:11.402922 waagent[1847]: 2026-01-28T01:24:11.402863Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 28 01:24:11.413817 waagent[1847]: 2026-01-28T01:24:11.413767Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 28 01:24:11.421661 waagent[1847]: 2026-01-28T01:24:11.421619Z INFO Daemon Daemon cloud-init is enabled: False Jan 28 01:24:11.425715 waagent[1847]: 2026-01-28T01:24:11.425679Z INFO Daemon Daemon Copying ovf-env.xml Jan 28 01:24:11.556352 waagent[1847]: 2026-01-28T01:24:11.556260Z INFO Daemon Daemon Successfully mounted dvd Jan 28 01:24:11.592868 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 28 01:24:11.596474 waagent[1847]: 2026-01-28T01:24:11.595004Z INFO Daemon Daemon Detect protocol endpoint Jan 28 01:24:11.599120 waagent[1847]: 2026-01-28T01:24:11.599076Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 01:24:11.603783 waagent[1847]: 2026-01-28T01:24:11.603747Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 28 01:24:11.609349 waagent[1847]: 2026-01-28T01:24:11.609310Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 28 01:24:11.613695 waagent[1847]: 2026-01-28T01:24:11.613658Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 28 01:24:11.617974 waagent[1847]: 2026-01-28T01:24:11.617939Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 28 01:24:11.680083 waagent[1847]: 2026-01-28T01:24:11.680043Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 28 01:24:11.685509 waagent[1847]: 2026-01-28T01:24:11.685486Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 28 01:24:11.690020 waagent[1847]: 2026-01-28T01:24:11.689988Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 28 01:24:11.801574 waagent[1847]: 2026-01-28T01:24:11.801480Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 28 01:24:11.807104 waagent[1847]: 2026-01-28T01:24:11.807053Z INFO Daemon Daemon Forcing an update of the goal state. Jan 28 01:24:11.814674 waagent[1847]: 2026-01-28T01:24:11.814626Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 01:24:12.136160 waagent[1847]: 2026-01-28T01:24:12.136117Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 28 01:24:12.141383 waagent[1847]: 2026-01-28T01:24:12.141339Z INFO Daemon Jan 28 01:24:12.143968 waagent[1847]: 2026-01-28T01:24:12.143933Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b636aecc-387f-495c-93d3-0c2b1bbba311 eTag: 15768314952330837050 source: Fabric] Jan 28 01:24:12.155134 waagent[1847]: 2026-01-28T01:24:12.155092Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 28 01:24:12.161388 waagent[1847]: 2026-01-28T01:24:12.161348Z INFO Daemon Jan 28 01:24:12.163763 waagent[1847]: 2026-01-28T01:24:12.163731Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 28 01:24:12.172517 waagent[1847]: 2026-01-28T01:24:12.172486Z INFO Daemon Daemon Downloading artifacts profile blob Jan 28 01:24:12.246254 waagent[1847]: 2026-01-28T01:24:12.246170Z INFO Daemon Downloaded certificate {'thumbprint': 'ACBE8BB2479E8F20F3EB0190BE29E3419097277D', 'hasPrivateKey': True} Jan 28 01:24:12.254706 waagent[1847]: 2026-01-28T01:24:12.254661Z INFO Daemon Fetch goal state completed Jan 28 01:24:12.264593 waagent[1847]: 2026-01-28T01:24:12.264553Z INFO Daemon Daemon Starting provisioning Jan 28 01:24:12.268753 waagent[1847]: 2026-01-28T01:24:12.268710Z INFO Daemon Daemon Handle ovf-env.xml. Jan 28 01:24:12.272887 waagent[1847]: 2026-01-28T01:24:12.272854Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-20d4350ff0] Jan 28 01:24:12.282475 waagent[1847]: 2026-01-28T01:24:12.279657Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-20d4350ff0] Jan 28 01:24:12.284957 waagent[1847]: 2026-01-28T01:24:12.284912Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 28 01:24:12.290303 waagent[1847]: 2026-01-28T01:24:12.290263Z INFO Daemon Daemon Primary interface is [eth0] Jan 28 01:24:12.322661 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:24:12.322668 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 28 01:24:12.322709 systemd-networkd[1361]: eth0: DHCP lease lost Jan 28 01:24:12.323933 waagent[1847]: 2026-01-28T01:24:12.323860Z INFO Daemon Daemon Create user account if not exists Jan 28 01:24:12.328573 waagent[1847]: 2026-01-28T01:24:12.328528Z INFO Daemon Daemon User core already exists, skip useradd Jan 28 01:24:12.333331 waagent[1847]: 2026-01-28T01:24:12.333291Z INFO Daemon Daemon Configure sudoer Jan 28 01:24:12.337258 waagent[1847]: 2026-01-28T01:24:12.337214Z INFO Daemon Daemon Configure sshd Jan 28 01:24:12.337564 systemd-networkd[1361]: eth0: DHCPv6 lease lost Jan 28 01:24:12.340751 waagent[1847]: 2026-01-28T01:24:12.340702Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 28 01:24:12.350602 waagent[1847]: 2026-01-28T01:24:12.350559Z INFO Daemon Daemon Deploy ssh public key. Jan 28 01:24:12.369506 systemd-networkd[1361]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 01:24:13.447974 waagent[1847]: 2026-01-28T01:24:13.447926Z INFO Daemon Daemon Provisioning complete Jan 28 01:24:13.462899 waagent[1847]: 2026-01-28T01:24:13.462857Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 28 01:24:13.467612 waagent[1847]: 2026-01-28T01:24:13.467565Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 28 01:24:13.475452 waagent[1847]: 2026-01-28T01:24:13.475406Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 28 01:24:13.603480 waagent[1913]: 2026-01-28T01:24:13.602993Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 28 01:24:13.603480 waagent[1913]: 2026-01-28T01:24:13.603143Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 28 01:24:13.603480 waagent[1913]: 2026-01-28T01:24:13.603194Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 28 01:24:14.228482 waagent[1913]: 2026-01-28T01:24:14.226405Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 28 01:24:14.228482 waagent[1913]: 2026-01-28T01:24:14.226663Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:24:14.228482 waagent[1913]: 2026-01-28T01:24:14.226725Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:24:14.240560 waagent[1913]: 2026-01-28T01:24:14.240484Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 01:24:14.246155 waagent[1913]: 2026-01-28T01:24:14.246108Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 28 01:24:14.246791 waagent[1913]: 2026-01-28T01:24:14.246752Z INFO ExtHandler Jan 28 01:24:14.246956 waagent[1913]: 2026-01-28T01:24:14.246924Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0fc90368-e9eb-4ae3-9f91-d79bfa688394 eTag: 15768314952330837050 source: Fabric] Jan 28 01:24:14.247321 waagent[1913]: 2026-01-28T01:24:14.247284Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
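The waagent exchanges above follow the WireServer goal-state protocol: GET http://168.63.129.16/machine/?comp=goalstate with the negotiated x-ms-version: 2012-11-30 header, then track the <Incarnation> element to detect new goal states (both the daemon and the ExtHandler above report "[incarnation 1]"). A sketch of pulling the incarnation out of a goal-state-shaped document; the XML literal is a hand-written stand-in, not a captured response:

# Sketch: parse the incarnation number from a WireServer goal state.
import xml.etree.ElementTree as ET

goal_state = """<GoalState>
  <Version>2012-11-30</Version>
  <Incarnation>1</Incarnation>
</GoalState>"""

incarnation = int(ET.fromstring(goal_state).findtext("Incarnation"))
print(f"goal state incarnation: {incarnation}")  # matches "[incarnation 1]" above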
Jan 28 01:24:14.251548 waagent[1913]: 2026-01-28T01:24:14.251501Z INFO ExtHandler Jan 28 01:24:14.251693 waagent[1913]: 2026-01-28T01:24:14.251663Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 28 01:24:14.255127 waagent[1913]: 2026-01-28T01:24:14.255098Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 28 01:24:14.361845 waagent[1913]: 2026-01-28T01:24:14.361736Z INFO ExtHandler Downloaded certificate {'thumbprint': 'ACBE8BB2479E8F20F3EB0190BE29E3419097277D', 'hasPrivateKey': True} Jan 28 01:24:14.362581 waagent[1913]: 2026-01-28T01:24:14.362541Z INFO ExtHandler Fetch goal state completed Jan 28 01:24:14.376768 waagent[1913]: 2026-01-28T01:24:14.376683Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1913 Jan 28 01:24:14.377084 waagent[1913]: 2026-01-28T01:24:14.377047Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 28 01:24:14.378718 waagent[1913]: 2026-01-28T01:24:14.378679Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 28 01:24:14.379149 waagent[1913]: 2026-01-28T01:24:14.379114Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 28 01:24:14.434367 waagent[1913]: 2026-01-28T01:24:14.434329Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 28 01:24:14.434746 waagent[1913]: 2026-01-28T01:24:14.434708Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 28 01:24:14.440639 waagent[1913]: 2026-01-28T01:24:14.440447Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 28 01:24:14.447274 systemd[1]: Reloading requested from client PID 1926 ('systemctl') (unit waagent.service)... Jan 28 01:24:14.447577 systemd[1]: Reloading... Jan 28 01:24:14.524870 zram_generator::config[1958]: No configuration found. Jan 28 01:24:14.626970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:24:14.700781 systemd[1]: Reloading finished in 252 ms. Jan 28 01:24:14.720678 waagent[1913]: 2026-01-28T01:24:14.720573Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 28 01:24:14.726731 systemd[1]: Reloading requested from client PID 2014 ('systemctl') (unit waagent.service)... Jan 28 01:24:14.726743 systemd[1]: Reloading... Jan 28 01:24:14.798696 zram_generator::config[2051]: No configuration found. Jan 28 01:24:14.893731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:24:14.968198 systemd[1]: Reloading finished in 241 ms. Jan 28 01:24:14.995085 waagent[1913]: 2026-01-28T01:24:14.994808Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 28 01:24:14.995085 waagent[1913]: 2026-01-28T01:24:14.994964Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 28 01:24:15.398204 waagent[1913]: 2026-01-28T01:24:15.398121Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 28 01:24:15.398777 waagent[1913]: 2026-01-28T01:24:15.398734Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 28 01:24:15.399539 waagent[1913]: 2026-01-28T01:24:15.399455Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 28 01:24:15.399933 waagent[1913]: 2026-01-28T01:24:15.399821Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 28 01:24:15.400428 waagent[1913]: 2026-01-28T01:24:15.400309Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 28 01:24:15.400505 waagent[1913]: 2026-01-28T01:24:15.400418Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 28 01:24:15.401042 waagent[1913]: 2026-01-28T01:24:15.400906Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 28 01:24:15.401042 waagent[1913]: 2026-01-28T01:24:15.401000Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:24:15.401127 waagent[1913]: 2026-01-28T01:24:15.401041Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 28 01:24:15.401287 waagent[1913]: 2026-01-28T01:24:15.401226Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:24:15.402540 waagent[1913]: 2026-01-28T01:24:15.401657Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 28 01:24:15.402692 waagent[1913]: 2026-01-28T01:24:15.402652Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:24:15.402828 waagent[1913]: 2026-01-28T01:24:15.402799Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:24:15.403049 waagent[1913]: 2026-01-28T01:24:15.403010Z INFO EnvHandler ExtHandler Configure routes Jan 28 01:24:15.403178 waagent[1913]: 2026-01-28T01:24:15.403147Z INFO EnvHandler ExtHandler Gateway:None Jan 28 01:24:15.403443 waagent[1913]: 2026-01-28T01:24:15.403405Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
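"DROP rule is not available" above means waagent has not yet installed its WireServer firewall; the environment thread then adds the three OUTPUT rules visible in the dump below: permit DNS (tcp/53) to 168.63.129.16, permit root-owned (UID 0) connections there, and drop any other new connection, so unprivileged processes cannot reach the unauthenticated WireServer. A hedged sketch of equivalent iptables invocations (rule order matters, root required; this mirrors the effect, not waagent's internal code):

# Sketch: OUTPUT-chain rules equivalent to the waagent firewall dump below.
import subprocess

WIRESERVER = "168.63.129.16"
rules = [
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-A", "OUTPUT", *rule], check=True)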
Jan 28 01:24:15.403961 waagent[1913]: 2026-01-28T01:24:15.403910Z INFO EnvHandler ExtHandler Routes:None Jan 28 01:24:15.405664 waagent[1913]: 2026-01-28T01:24:15.405619Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 28 01:24:15.405664 waagent[1913]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 28 01:24:15.405664 waagent[1913]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 28 01:24:15.405664 waagent[1913]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 28 01:24:15.405664 waagent[1913]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:24:15.405664 waagent[1913]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:24:15.405664 waagent[1913]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:24:15.407983 waagent[1913]: 2026-01-28T01:24:15.407942Z INFO ExtHandler ExtHandler Jan 28 01:24:15.408350 waagent[1913]: 2026-01-28T01:24:15.408308Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4dcf7d30-0419-441f-9dc3-860d8c02926d correlation 66958152-bd53-42a3-a1c5-af76b2c90f9b created: 2026-01-28T01:23:13.268608Z] Jan 28 01:24:15.409485 waagent[1913]: 2026-01-28T01:24:15.409419Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 28 01:24:15.411018 waagent[1913]: 2026-01-28T01:24:15.410979Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jan 28 01:24:15.441199 waagent[1913]: 2026-01-28T01:24:15.441140Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 022913F4-E019-44F9-8241-B28D0430D8C1;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 28 01:24:15.463025 waagent[1913]: 2026-01-28T01:24:15.462632Z INFO MonitorHandler ExtHandler Network interfaces: Jan 28 01:24:15.463025 waagent[1913]: Executing ['ip', '-a', '-o', 'link']: Jan 28 01:24:15.463025 waagent[1913]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 28 01:24:15.463025 waagent[1913]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:7a:2a:fc brd ff:ff:ff:ff:ff:ff Jan 28 01:24:15.463025 waagent[1913]: 3: enP48975s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:7a:2a:fc brd ff:ff:ff:ff:ff:ff\ altname enP48975p0s2 Jan 28 01:24:15.463025 waagent[1913]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 28 01:24:15.463025 waagent[1913]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 28 01:24:15.463025 waagent[1913]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 28 01:24:15.463025 waagent[1913]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 28 01:24:15.463025 waagent[1913]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 28 01:24:15.463025 waagent[1913]: 2: eth0 inet6 fe80::7eed:8dff:fe7a:2afc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 28 01:24:15.574483 waagent[1913]: 2026-01-28T01:24:15.573839Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 28 01:24:15.574483 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:24:15.574483 waagent[1913]: pkts bytes target prot opt in out source destination Jan 28 01:24:15.574483 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:24:15.574483 waagent[1913]: pkts bytes target prot opt in out source destination Jan 28 01:24:15.574483 waagent[1913]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:24:15.574483 waagent[1913]: pkts bytes target prot opt in out source destination Jan 28 01:24:15.574483 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 01:24:15.574483 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 01:24:15.574483 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 01:24:15.576922 waagent[1913]: 2026-01-28T01:24:15.576858Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 28 01:24:15.576922 waagent[1913]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:24:15.576922 waagent[1913]: pkts bytes target prot opt in out source destination Jan 28 01:24:15.576922 waagent[1913]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:24:15.576922 waagent[1913]: pkts bytes target prot opt in out source destination Jan 28 01:24:15.576922 waagent[1913]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:24:15.576922 waagent[1913]: pkts bytes target prot opt in out source destination Jan 28 01:24:15.576922 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 01:24:15.576922 waagent[1913]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 01:24:15.576922 waagent[1913]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 01:24:15.577171 waagent[1913]: 2026-01-28T01:24:15.577139Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 28 01:24:20.054626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:24:20.062641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:20.161817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:20.165381 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:24:20.277840 kubelet[2141]: E0128 01:24:20.277794 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:24:20.281272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:24:20.281562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:24:29.475358 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:24:29.476501 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:53544.service - OpenSSH per-connection server daemon (10.200.16.10:53544). 
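The /proc/net/route table MonitorHandler printed above encodes each IPv4 field as the hex of a host-order (little-endian on this aarch64 VM) 32-bit value, which is why the default gateway appears as 0114C80A. A stdlib-only decode of the values in that dump:

# Sketch: decode the little-endian hex IPv4 fields from the /proc/net/route
# dump above.
import ipaddress, struct

def decode(hexfield: str) -> str:
    return str(ipaddress.IPv4Address(struct.unpack("<I", bytes.fromhex(hexfield))[0]))

assert decode("0114C80A") == "10.200.20.1"      # default gateway
assert decode("10813FA8") == "168.63.129.16"    # WireServer host route
assert decode("FEA9FEA9") == "169.254.169.254"  # IMDS host route
assert decode("00FFFFFF") == "255.255.255.0"    # the /24 subnet mask

The decoded host routes are exactly the 168.63.129.16 WireServer that the firewall rules above constrain and the 169.254.169.254 IMDS endpoint queried during the metadata fetch earlier.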
Jan 28 01:24:30.049309 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 53544 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:30.050594 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:30.054143 systemd-logind[1706]: New session 3 of user core. Jan 28 01:24:30.064585 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:24:30.304608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:24:30.312619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:30.411168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:30.415138 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:24:30.474804 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:37388.service - OpenSSH per-connection server daemon (10.200.16.10:37388). Jan 28 01:24:30.513811 kubelet[2160]: E0128 01:24:30.513760 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:24:30.516624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:24:30.516979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:24:30.939050 sshd[2167]: Accepted publickey for core from 10.200.16.10 port 37388 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:30.940310 sshd[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:30.944321 systemd-logind[1706]: New session 4 of user core. Jan 28 01:24:30.953580 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:24:31.273771 sshd[2167]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:31.276446 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:37388.service: Deactivated successfully. Jan 28 01:24:31.277941 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:24:31.279199 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:24:31.280153 systemd-logind[1706]: Removed session 4. Jan 28 01:24:31.361214 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:37394.service - OpenSSH per-connection server daemon (10.200.16.10:37394). Jan 28 01:24:31.844079 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 37394 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:31.845330 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:31.849713 systemd-logind[1706]: New session 5 of user core. Jan 28 01:24:31.855700 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:24:32.170937 chronyd[1697]: Selected source PHC0 Jan 28 01:24:32.190920 sshd[2175]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:32.194159 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:37394.service: Deactivated successfully. Jan 28 01:24:32.197121 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:24:32.197872 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit. 
Jan 28 01:24:32.199870 systemd-logind[1706]: Removed session 5. Jan 28 01:24:32.275752 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:37404.service - OpenSSH per-connection server daemon (10.200.16.10:37404). Jan 28 01:24:32.723339 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 37404 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:32.724637 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:32.729062 systemd-logind[1706]: New session 6 of user core. Jan 28 01:24:32.735610 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:24:33.056557 sshd[2182]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:33.059052 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:37404.service: Deactivated successfully. Jan 28 01:24:33.060754 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:24:33.062052 systemd-logind[1706]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:24:33.063117 systemd-logind[1706]: Removed session 6. Jan 28 01:24:33.143295 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:37414.service - OpenSSH per-connection server daemon (10.200.16.10:37414). Jan 28 01:24:33.626363 sshd[2189]: Accepted publickey for core from 10.200.16.10 port 37414 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:33.627666 sshd[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:33.631639 systemd-logind[1706]: New session 7 of user core. Jan 28 01:24:33.638581 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:24:34.050935 sudo[2192]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:24:34.051221 sudo[2192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:24:34.086611 sudo[2192]: pam_unix(sudo:session): session closed for user root Jan 28 01:24:34.157636 sshd[2189]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:34.161309 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:37414.service: Deactivated successfully. Jan 28 01:24:34.162759 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:24:34.163355 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:24:34.164320 systemd-logind[1706]: Removed session 7. Jan 28 01:24:34.244895 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:37416.service - OpenSSH per-connection server daemon (10.200.16.10:37416). Jan 28 01:24:34.728916 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 37416 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:34.730215 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:34.734705 systemd-logind[1706]: New session 8 of user core. Jan 28 01:24:34.740609 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 28 01:24:35.004102 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:24:35.004366 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:24:35.007197 sudo[2201]: pam_unix(sudo:session): session closed for user root Jan 28 01:24:35.011429 sudo[2200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 01:24:35.011784 sudo[2200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:24:35.028729 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 01:24:35.029817 auditctl[2204]: No rules Jan 28 01:24:35.030115 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:24:35.030270 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 01:24:35.032398 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:24:35.056859 augenrules[2222]: No rules Jan 28 01:24:35.058243 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:24:35.060356 sudo[2200]: pam_unix(sudo:session): session closed for user root Jan 28 01:24:35.131652 sshd[2197]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:35.135008 systemd-logind[1706]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:24:35.135309 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:37416.service: Deactivated successfully. Jan 28 01:24:35.136848 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:24:35.137657 systemd-logind[1706]: Removed session 8. Jan 28 01:24:35.217710 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:37432.service - OpenSSH per-connection server daemon (10.200.16.10:37432). Jan 28 01:24:35.706365 sshd[2230]: Accepted publickey for core from 10.200.16.10 port 37432 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:35.707657 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:35.712231 systemd-logind[1706]: New session 9 of user core. Jan 28 01:24:35.717601 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 01:24:35.980258 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:24:35.980542 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:24:36.928667 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:24:36.928806 (dockerd)[2248]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:24:37.591178 dockerd[2248]: time="2026-01-28T01:24:37.591124999Z" level=info msg="Starting up" Jan 28 01:24:38.059954 dockerd[2248]: time="2026-01-28T01:24:38.059914479Z" level=info msg="Loading containers: start." Jan 28 01:24:38.218485 kernel: Initializing XFRM netlink socket Jan 28 01:24:38.428821 systemd-networkd[1361]: docker0: Link UP Jan 28 01:24:38.455607 dockerd[2248]: time="2026-01-28T01:24:38.455565399Z" level=info msg="Loading containers: done." Jan 28 01:24:38.466427 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3575614738-merged.mount: Deactivated successfully. 
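Editor's note: the sudo sequence above deletes the only two rule files and reloads audit-rules, which is why both auditctl and augenrules then report "No rules". A sketch of the same emptiness check, assuming the standard /etc/audit/rules.d layout visible in the commands above:

```python
from pathlib import Path

RULES_DIR = Path("/etc/audit/rules.d")

# augenrules concatenates *.rules files from this directory; with
# 80-selinux.rules and 99-default.rules removed, nothing remains to load.
rule_files = sorted(RULES_DIR.glob("*.rules"))
print([p.name for p in rule_files] or "No rules")
```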
Jan 28 01:24:38.477243 dockerd[2248]: time="2026-01-28T01:24:38.477201439Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:24:38.477372 dockerd[2248]: time="2026-01-28T01:24:38.477315039Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 01:24:38.477453 dockerd[2248]: time="2026-01-28T01:24:38.477433439Z" level=info msg="Daemon has completed initialization" Jan 28 01:24:38.527040 dockerd[2248]: time="2026-01-28T01:24:38.526871999Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:24:38.527153 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:24:39.300932 containerd[1726]: time="2026-01-28T01:24:39.300894399Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 01:24:40.188735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961657237.mount: Deactivated successfully. Jan 28 01:24:40.554538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:24:40.562646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:40.671340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:40.675133 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:24:40.765734 kubelet[2425]: E0128 01:24:40.765679 2425 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:24:40.767859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:24:40.767984 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:24:41.792767 containerd[1726]: time="2026-01-28T01:24:41.792719031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:41.796623 containerd[1726]: time="2026-01-28T01:24:41.796593386Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 28 01:24:41.799123 containerd[1726]: time="2026-01-28T01:24:41.799090062Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:41.804018 containerd[1726]: time="2026-01-28T01:24:41.803522536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:41.804854 containerd[1726]: time="2026-01-28T01:24:41.804823094Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.503887775s" Jan 28 01:24:41.804972 containerd[1726]: time="2026-01-28T01:24:41.804949094Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 28 01:24:41.805981 containerd[1726]: time="2026-01-28T01:24:41.805948053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 01:24:43.413485 containerd[1726]: time="2026-01-28T01:24:43.413406731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:43.416667 containerd[1726]: time="2026-01-28T01:24:43.416638247Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 28 01:24:43.419955 containerd[1726]: time="2026-01-28T01:24:43.419912922Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:43.427414 containerd[1726]: time="2026-01-28T01:24:43.426223794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:43.427414 containerd[1726]: time="2026-01-28T01:24:43.427209112Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.621146819s" Jan 28 01:24:43.427414 containerd[1726]: time="2026-01-28T01:24:43.427235992Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 28 
01:24:43.427818 containerd[1726]: time="2026-01-28T01:24:43.427793951Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 01:24:44.631002 containerd[1726]: time="2026-01-28T01:24:44.630955463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:44.634012 containerd[1726]: time="2026-01-28T01:24:44.633818260Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 28 01:24:44.637086 containerd[1726]: time="2026-01-28T01:24:44.637063695Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:44.645669 containerd[1726]: time="2026-01-28T01:24:44.645592163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:44.646850 containerd[1726]: time="2026-01-28T01:24:44.646721402Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.218832811s" Jan 28 01:24:44.646850 containerd[1726]: time="2026-01-28T01:24:44.646754162Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 28 01:24:44.647403 containerd[1726]: time="2026-01-28T01:24:44.647254361Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 01:24:45.778499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969537457.mount: Deactivated successfully. 
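Editor's note: the pulls logged above each report a byte count and a wall-clock duration, so effective registry throughput can be read straight off the journal. The numbers below are copied verbatim from those entries; only the dict layout is added:

```python
# (bytes, seconds) exactly as containerd logged them above
pulls = {
    "kube-apiserver:v1.32.11": (26_438_581, 2.503887775),
    "kube-controller-manager:v1.32.11": (24_206_567, 1.621146819),
    "kube-scheduler:v1.32.11": (19_201_246, 1.218832811),
}

for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 1e6:.1f} MB/s")
```

All three land in the 10-16 MB/s range, so the ~7.3s etcd pull further down (about 68 MB) is roughly consistent rather than anomalous.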
Jan 28 01:24:46.098724 containerd[1726]: time="2026-01-28T01:24:46.098678613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:46.102403 containerd[1726]: time="2026-01-28T01:24:46.102379568Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 28 01:24:46.106073 containerd[1726]: time="2026-01-28T01:24:46.106034243Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:46.110524 containerd[1726]: time="2026-01-28T01:24:46.110488717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:46.111156 containerd[1726]: time="2026-01-28T01:24:46.111039036Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.463756755s" Jan 28 01:24:46.111156 containerd[1726]: time="2026-01-28T01:24:46.111070476Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 28 01:24:46.111604 containerd[1726]: time="2026-01-28T01:24:46.111577315Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 01:24:46.744370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932222684.mount: Deactivated successfully. 
Jan 28 01:24:48.405519 containerd[1726]: time="2026-01-28T01:24:48.404883191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:48.407317 containerd[1726]: time="2026-01-28T01:24:48.407267269Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 28 01:24:48.410168 containerd[1726]: time="2026-01-28T01:24:48.410127907Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:48.414636 containerd[1726]: time="2026-01-28T01:24:48.414590864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:48.416264 containerd[1726]: time="2026-01-28T01:24:48.415934143Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.304263508s" Jan 28 01:24:48.416264 containerd[1726]: time="2026-01-28T01:24:48.415968703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 28 01:24:48.417143 containerd[1726]: time="2026-01-28T01:24:48.417105662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:24:48.961917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752310241.mount: Deactivated successfully. 
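Editor's note: the "\x2d" runs in the tmpmount unit names above are systemd's path escaping: "/" maps to "-", so literal dashes in a path must themselves be escaped as "\x2d". A minimal re-implementation for illustration only (the real tool is systemd-escape, and full escaping covers more characters than dashes):

```python
def path_to_mount_unit(path: str) -> str:
    # Simplified: systemd also escapes other bytes outside [a-zA-Z0-9:_.] as \xNN.
    body = "-".join(seg.replace("-", "\\x2d") for seg in path.strip("/").split("/"))
    return body + ".mount"

print(path_to_mount_unit("/var/lib/containerd/tmpmounts/containerd-mount752310241"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount752310241.mount
```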
Jan 28 01:24:48.980633 containerd[1726]: time="2026-01-28T01:24:48.980587368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:48.991772 containerd[1726]: time="2026-01-28T01:24:48.991716120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 28 01:24:48.998336 containerd[1726]: time="2026-01-28T01:24:48.998279915Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:49.003688 containerd[1726]: time="2026-01-28T01:24:49.003643591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:49.004687 containerd[1726]: time="2026-01-28T01:24:49.004295231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 587.061809ms" Jan 28 01:24:49.004687 containerd[1726]: time="2026-01-28T01:24:49.004326111Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 28 01:24:49.005164 containerd[1726]: time="2026-01-28T01:24:49.004950710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 01:24:49.623286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253884130.mount: Deactivated successfully. Jan 28 01:24:50.804592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:24:50.812631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:51.086820 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 28 01:24:52.188757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:52.192570 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:24:52.224341 kubelet[2584]: E0128 01:24:52.224276 2584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:24:52.226802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:24:52.226947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:24:54.346732 update_engine[1708]: I20260128 01:24:53.454588 1708 update_attempter.cc:509] Updating boot flags...
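Editor's note: the three "Scheduled restart job" entries in this section (counters 2, 3, 4) are stamped 01:24:30.304608, 01:24:40.554538 and 01:24:50.804592, i.e. 10.25s apart, which points at a fixed RestartSec of about 10s. That is an inference from the timestamps, not something read from the unit file:

```python
from datetime import datetime

# Timestamps copied from the "Scheduled restart job" entries above.
stamps = ["01:24:30.304608", "01:24:40.554538", "01:24:50.804592"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]

for a, b in zip(times, times[1:]):
    print(f"{(b - a).total_seconds():.2f}s between scheduled restarts")  # 10.25s
```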
Jan 28 01:24:54.426516 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2601) Jan 28 01:24:54.728553 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2594) Jan 28 01:24:54.836616 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2594) Jan 28 01:24:56.252703 containerd[1726]: time="2026-01-28T01:24:56.252655509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:56.255086 containerd[1726]: time="2026-01-28T01:24:56.255043588Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 28 01:24:56.302514 containerd[1726]: time="2026-01-28T01:24:56.302466033Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:56.349227 containerd[1726]: time="2026-01-28T01:24:56.349180558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:56.350222 containerd[1726]: time="2026-01-28T01:24:56.350009438Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 7.344977808s" Jan 28 01:24:56.350222 containerd[1726]: time="2026-01-28T01:24:56.350039478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 28 01:25:00.368861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:25:00.380779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:25:00.409635 systemd[1]: Reloading requested from client PID 2715 ('systemctl') (unit session-9.scope)... Jan 28 01:25:00.409781 systemd[1]: Reloading... Jan 28 01:25:00.516640 zram_generator::config[2755]: No configuration found. Jan 28 01:25:00.618250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:25:00.694840 systemd[1]: Reloading finished in 284 ms. Jan 28 01:25:00.735085 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:25:00.735302 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 01:25:00.735662 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:25:00.739696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:25:08.613998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:25:08.620381 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:25:08.968190 kubelet[2822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:25:08.968871 kubelet[2822]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:25:08.968871 kubelet[2822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:25:08.968871 kubelet[2822]: I0128 01:25:08.968690 2822 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:25:09.858548 kubelet[2822]: I0128 01:25:09.858510 2822 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:25:09.858548 kubelet[2822]: I0128 01:25:09.858541 2822 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:25:09.858823 kubelet[2822]: I0128 01:25:09.858806 2822 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:25:09.880788 kubelet[2822]: E0128 01:25:09.880747 2822 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:09.883487 kubelet[2822]: I0128 01:25:09.882786 2822 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:25:09.887241 kubelet[2822]: E0128 01:25:09.887212 2822 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:25:09.887241 kubelet[2822]: I0128 01:25:09.887240 2822 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:25:09.889697 kubelet[2822]: I0128 01:25:09.889682 2822 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:25:09.890452 kubelet[2822]: I0128 01:25:09.890417 2822 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:25:09.890625 kubelet[2822]: I0128 01:25:09.890453 2822 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-20d4350ff0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:25:09.890717 kubelet[2822]: I0128 01:25:09.890635 2822 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:25:09.890717 kubelet[2822]: I0128 01:25:09.890644 2822 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:25:09.890815 kubelet[2822]: I0128 01:25:09.890799 2822 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:25:09.893521 kubelet[2822]: I0128 01:25:09.893504 2822 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:25:09.893555 kubelet[2822]: I0128 01:25:09.893527 2822 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:25:09.893555 kubelet[2822]: I0128 01:25:09.893544 2822 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:25:09.893555 kubelet[2822]: I0128 01:25:09.893554 2822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:25:09.897478 kubelet[2822]: W0128 01:25:09.896725 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jan 28 01:25:09.897478 kubelet[2822]: E0128 01:25:09.896782 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:09.897478 kubelet[2822]: W0128 
01:25:09.896841 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-20d4350ff0&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jan 28 01:25:09.897478 kubelet[2822]: E0128 01:25:09.896867 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-20d4350ff0&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:09.897478 kubelet[2822]: I0128 01:25:09.896937 2822 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:25:09.897883 kubelet[2822]: I0128 01:25:09.897863 2822 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:25:09.897966 kubelet[2822]: W0128 01:25:09.897922 2822 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:25:09.898740 kubelet[2822]: I0128 01:25:09.898716 2822 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:25:09.898795 kubelet[2822]: I0128 01:25:09.898750 2822 server.go:1287] "Started kubelet" Jan 28 01:25:09.905011 kubelet[2822]: E0128 01:25:09.904880 2822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-20d4350ff0.188ec0a2ff01e5a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-20d4350ff0,UID:ci-4081.3.6-n-20d4350ff0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-20d4350ff0,},FirstTimestamp:2026-01-28 01:25:09.898732965 +0000 UTC m=+1.275370432,LastTimestamp:2026-01-28 01:25:09.898732965 +0000 UTC m=+1.275370432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-20d4350ff0,}" Jan 28 01:25:09.907605 kubelet[2822]: I0128 01:25:09.907577 2822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:25:09.910840 kubelet[2822]: I0128 01:25:09.910806 2822 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:25:09.919583 kubelet[2822]: I0128 01:25:09.919551 2822 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:25:09.921491 kubelet[2822]: I0128 01:25:09.919757 2822 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:25:09.921491 kubelet[2822]: E0128 01:25:09.919765 2822 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" Jan 28 01:25:09.921491 kubelet[2822]: I0128 01:25:09.920788 2822 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:25:09.921719 kubelet[2822]: I0128 01:25:09.921696 2822 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:25:09.921761 kubelet[2822]: I0128 01:25:09.921750 2822 
reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:25:09.922029 kubelet[2822]: I0128 01:25:09.921980 2822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:25:09.922294 kubelet[2822]: I0128 01:25:09.922275 2822 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:25:09.923432 kubelet[2822]: E0128 01:25:09.923402 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-20d4350ff0?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Jan 28 01:25:09.923672 kubelet[2822]: I0128 01:25:09.923654 2822 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:25:09.923822 kubelet[2822]: I0128 01:25:09.923804 2822 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:25:09.924168 kubelet[2822]: E0128 01:25:09.924152 2822 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:25:09.925096 kubelet[2822]: W0128 01:25:09.925057 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jan 28 01:25:09.925218 kubelet[2822]: E0128 01:25:09.925196 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:09.925758 kubelet[2822]: I0128 01:25:09.925743 2822 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:25:09.933165 kubelet[2822]: I0128 01:25:09.933119 2822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:25:09.933993 kubelet[2822]: I0128 01:25:09.933968 2822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:25:09.933993 kubelet[2822]: I0128 01:25:09.933987 2822 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:25:09.934077 kubelet[2822]: I0128 01:25:09.934009 2822 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
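Editor's note: the container_manager_linux.go entry above embeds the kubelet's hard-eviction thresholds inside the NodeConfig JSON. Re-parsed here with the array copied from that entry, trimmed to the fields printed (GracePeriod and MinReclaim dropped for brevity):

```python
import json

# HardEvictionThresholds copied from the NodeConfig logged above.
thresholds = json.loads("""[
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
  {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
]""")

for t in thresholds:
    limit = t["Value"]["Quantity"] or f"{t['Value']['Percentage']:.0%}"
    print(f'{t["Signal"]} {t["Operator"]} {limit}')
```

This prints the familiar defaults (memory.available < 100Mi, nodefs.available < 10%, and so on) that the eviction manager entries below go on to act upon.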
Jan 28 01:25:09.934077 kubelet[2822]: I0128 01:25:09.934017 2822 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:25:09.934077 kubelet[2822]: E0128 01:25:09.934052 2822 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:25:09.940340 kubelet[2822]: W0128 01:25:09.940279 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jan 28 01:25:09.940340 kubelet[2822]: E0128 01:25:09.940331 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:10.020678 kubelet[2822]: E0128 01:25:10.020598 2822 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" Jan 28 01:25:10.022136 kubelet[2822]: I0128 01:25:10.022060 2822 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:25:10.022136 kubelet[2822]: I0128 01:25:10.022133 2822 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:25:10.022227 kubelet[2822]: I0128 01:25:10.022168 2822 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:25:10.028149 kubelet[2822]: I0128 01:25:10.028125 2822 policy_none.go:49] "None policy: Start" Jan 28 01:25:10.028149 kubelet[2822]: I0128 01:25:10.028148 2822 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:25:10.028256 kubelet[2822]: I0128 01:25:10.028164 2822 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:25:10.034210 kubelet[2822]: E0128 01:25:10.034194 2822 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:25:10.036157 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:25:10.050020 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:25:10.053099 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:25:10.060342 kubelet[2822]: I0128 01:25:10.060320 2822 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:25:10.060665 kubelet[2822]: I0128 01:25:10.060647 2822 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:25:10.061036 kubelet[2822]: I0128 01:25:10.060723 2822 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:25:10.061036 kubelet[2822]: I0128 01:25:10.060950 2822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:25:10.062453 kubelet[2822]: E0128 01:25:10.062431 2822 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:25:10.062544 kubelet[2822]: E0128 01:25:10.062482 2822 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-20d4350ff0\" not found" Jan 28 01:25:10.124763 kubelet[2822]: E0128 01:25:10.124669 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-20d4350ff0?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Jan 28 01:25:10.163185 kubelet[2822]: I0128 01:25:10.163150 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.163527 kubelet[2822]: E0128 01:25:10.163503 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.244589 systemd[1]: Created slice kubepods-burstable-pod37f75a2318e0e5ee0e8bd42564acad50.slice - libcontainer container kubepods-burstable-pod37f75a2318e0e5ee0e8bd42564acad50.slice. Jan 28 01:25:10.253172 kubelet[2822]: E0128 01:25:10.253151 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.256778 systemd[1]: Created slice kubepods-burstable-pod3fd0a7680c4b8b655e5b1113c0aa8924.slice - libcontainer container kubepods-burstable-pod3fd0a7680c4b8b655e5b1113c0aa8924.slice. Jan 28 01:25:10.259104 kubelet[2822]: E0128 01:25:10.258698 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.261182 systemd[1]: Created slice kubepods-burstable-pod789d310236829e9caabbd49a28e3e66e.slice - libcontainer container kubepods-burstable-pod789d310236829e9caabbd49a28e3e66e.slice. 
Jan 28 01:25:10.262842 kubelet[2822]: E0128 01:25:10.262824 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.322915 kubelet[2822]: I0128 01:25:10.322878 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.323231 kubelet[2822]: I0128 01:25:10.323073 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.323231 kubelet[2822]: I0128 01:25:10.323095 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37f75a2318e0e5ee0e8bd42564acad50-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-20d4350ff0\" (UID: \"37f75a2318e0e5ee0e8bd42564acad50\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.323231 kubelet[2822]: I0128 01:25:10.323111 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.323231 kubelet[2822]: I0128 01:25:10.323129 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.323231 kubelet[2822]: I0128 01:25:10.323149 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.323370 kubelet[2822]: I0128 01:25:10.323166 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/789d310236829e9caabbd49a28e3e66e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-20d4350ff0\" (UID: \"789d310236829e9caabbd49a28e3e66e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.323370 kubelet[2822]: I0128 01:25:10.323180 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37f75a2318e0e5ee0e8bd42564acad50-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.6-n-20d4350ff0\" (UID: \"37f75a2318e0e5ee0e8bd42564acad50\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.323370 kubelet[2822]: I0128 01:25:10.323197 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37f75a2318e0e5ee0e8bd42564acad50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-20d4350ff0\" (UID: \"37f75a2318e0e5ee0e8bd42564acad50\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.365301 kubelet[2822]: I0128 01:25:10.365280 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.365658 kubelet[2822]: E0128 01:25:10.365628 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.526058 kubelet[2822]: E0128 01:25:10.525953 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-20d4350ff0?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Jan 28 01:25:10.554953 containerd[1726]: time="2026-01-28T01:25:10.554849067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-20d4350ff0,Uid:37f75a2318e0e5ee0e8bd42564acad50,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:10.559903 containerd[1726]: time="2026-01-28T01:25:10.559692023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-20d4350ff0,Uid:3fd0a7680c4b8b655e5b1113c0aa8924,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:10.564643 containerd[1726]: time="2026-01-28T01:25:10.564441260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-20d4350ff0,Uid:789d310236829e9caabbd49a28e3e66e,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:10.737503 kubelet[2822]: W0128 01:25:10.737409 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jan 28 01:25:10.737503 kubelet[2822]: E0128 01:25:10.737453 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:10.768484 kubelet[2822]: I0128 01:25:10.768154 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.768484 kubelet[2822]: E0128 01:25:10.768448 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:10.902792 kubelet[2822]: W0128 01:25:10.902705 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-20d4350ff0&limit=500&resourceVersion=0": dial tcp 
10.200.20.11:6443: connect: connection refused Jan 28 01:25:10.902792 kubelet[2822]: E0128 01:25:10.902767 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-20d4350ff0&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:11.131150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2060710292.mount: Deactivated successfully. Jan 28 01:25:11.153555 containerd[1726]: time="2026-01-28T01:25:11.153203813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:25:11.156365 containerd[1726]: time="2026-01-28T01:25:11.155663611Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:25:11.158243 containerd[1726]: time="2026-01-28T01:25:11.158084569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 28 01:25:11.161585 containerd[1726]: time="2026-01-28T01:25:11.160874607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:25:11.163911 containerd[1726]: time="2026-01-28T01:25:11.163875444Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:25:11.166111 containerd[1726]: time="2026-01-28T01:25:11.166078083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:25:11.168603 containerd[1726]: time="2026-01-28T01:25:11.168551361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:25:11.171962 containerd[1726]: time="2026-01-28T01:25:11.171916038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:25:11.173163 containerd[1726]: time="2026-01-28T01:25:11.172709958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 617.787451ms" Jan 28 01:25:11.176641 containerd[1726]: time="2026-01-28T01:25:11.176606035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 616.849932ms" Jan 28 01:25:11.182078 containerd[1726]: time="2026-01-28T01:25:11.181938951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 617.427571ms" Jan 28 01:25:11.326776 kubelet[2822]: E0128 01:25:11.326729 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-20d4350ff0?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Jan 28 01:25:11.437726 kubelet[2822]: W0128 01:25:11.437563 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jan 28 01:25:11.437726 kubelet[2822]: E0128 01:25:11.437627 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:11.470346 kubelet[2822]: W0128 01:25:11.470212 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jan 28 01:25:11.470346 kubelet[2822]: E0128 01:25:11.470258 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:11.572713 kubelet[2822]: I0128 01:25:11.572644 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:11.573214 kubelet[2822]: E0128 01:25:11.573193 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:11.792384 containerd[1726]: time="2026-01-28T01:25:11.792021847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:25:11.792384 containerd[1726]: time="2026-01-28T01:25:11.792068127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:25:11.792384 containerd[1726]: time="2026-01-28T01:25:11.792092567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:11.792384 containerd[1726]: time="2026-01-28T01:25:11.792168967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:11.796365 containerd[1726]: time="2026-01-28T01:25:11.796211604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:25:11.796365 containerd[1726]: time="2026-01-28T01:25:11.796299564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:25:11.796365 containerd[1726]: time="2026-01-28T01:25:11.796318284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:11.796762 containerd[1726]: time="2026-01-28T01:25:11.796692524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:11.801717 containerd[1726]: time="2026-01-28T01:25:11.800958161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:25:11.801861 containerd[1726]: time="2026-01-28T01:25:11.801617880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:25:11.802012 containerd[1726]: time="2026-01-28T01:25:11.801850320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:11.802265 containerd[1726]: time="2026-01-28T01:25:11.802226480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:11.822619 systemd[1]: Started cri-containerd-7b054538c434ef5d9cd8ebf2631f3722ebe85f99c5fd708ef88ac91e65d3fcb6.scope - libcontainer container 7b054538c434ef5d9cd8ebf2631f3722ebe85f99c5fd708ef88ac91e65d3fcb6. Jan 28 01:25:11.827237 systemd[1]: Started cri-containerd-001ec16843cab67e8d90e36e3e40f8285bb455304cfaff6c34f4813f2feefe59.scope - libcontainer container 001ec16843cab67e8d90e36e3e40f8285bb455304cfaff6c34f4813f2feefe59. Jan 28 01:25:11.828867 systemd[1]: Started cri-containerd-95d69d0fe125e4b8d88278aa0102b28d4dc0013f3782823ab387c66255fbcb24.scope - libcontainer container 95d69d0fe125e4b8d88278aa0102b28d4dc0013f3782823ab387c66255fbcb24. 
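Editor's note: every "connect: connection refused" in this section is a dial against the same endpoint, https://10.200.20.11:6443, which cannot accept connections until the kube-apiserver container started below is actually serving. A minimal TCP probe of that endpoint, assuming nothing beyond the address in the log:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True once something is listening; False while dials are refused."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("10.200.20.11", 6443))
```

A probe like this flips from False to True once the static apiserver pod is up, which is the same transition the reflector and certificate_manager errors stop on.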
Jan 28 01:25:11.875485 containerd[1726]: time="2026-01-28T01:25:11.874545105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-20d4350ff0,Uid:37f75a2318e0e5ee0e8bd42564acad50,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b054538c434ef5d9cd8ebf2631f3722ebe85f99c5fd708ef88ac91e65d3fcb6\"" Jan 28 01:25:11.875485 containerd[1726]: time="2026-01-28T01:25:11.874686745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-20d4350ff0,Uid:789d310236829e9caabbd49a28e3e66e,Namespace:kube-system,Attempt:0,} returns sandbox id \"001ec16843cab67e8d90e36e3e40f8285bb455304cfaff6c34f4813f2feefe59\"" Jan 28 01:25:11.880705 containerd[1726]: time="2026-01-28T01:25:11.880666620Z" level=info msg="CreateContainer within sandbox \"001ec16843cab67e8d90e36e3e40f8285bb455304cfaff6c34f4813f2feefe59\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:25:11.880899 containerd[1726]: time="2026-01-28T01:25:11.880802540Z" level=info msg="CreateContainer within sandbox \"7b054538c434ef5d9cd8ebf2631f3722ebe85f99c5fd708ef88ac91e65d3fcb6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:25:11.888236 containerd[1726]: time="2026-01-28T01:25:11.887793375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-20d4350ff0,Uid:3fd0a7680c4b8b655e5b1113c0aa8924,Namespace:kube-system,Attempt:0,} returns sandbox id \"95d69d0fe125e4b8d88278aa0102b28d4dc0013f3782823ab387c66255fbcb24\"" Jan 28 01:25:11.890362 containerd[1726]: time="2026-01-28T01:25:11.890329853Z" level=info msg="CreateContainer within sandbox \"95d69d0fe125e4b8d88278aa0102b28d4dc0013f3782823ab387c66255fbcb24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:25:11.934738 containerd[1726]: time="2026-01-28T01:25:11.934690219Z" level=info msg="CreateContainer within sandbox \"7b054538c434ef5d9cd8ebf2631f3722ebe85f99c5fd708ef88ac91e65d3fcb6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5a0382072631303b42615ca516b2f7caf5b5f1f1c5cc39c6bb58034342326aa\"" Jan 28 01:25:11.935300 containerd[1726]: time="2026-01-28T01:25:11.935276699Z" level=info msg="StartContainer for \"d5a0382072631303b42615ca516b2f7caf5b5f1f1c5cc39c6bb58034342326aa\"" Jan 28 01:25:11.946953 containerd[1726]: time="2026-01-28T01:25:11.946902650Z" level=info msg="CreateContainer within sandbox \"95d69d0fe125e4b8d88278aa0102b28d4dc0013f3782823ab387c66255fbcb24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59100fb51d212cc65020a5459bd8011f7d509907755995088aa02b813541520d\"" Jan 28 01:25:11.948141 containerd[1726]: time="2026-01-28T01:25:11.947487849Z" level=info msg="StartContainer for \"59100fb51d212cc65020a5459bd8011f7d509907755995088aa02b813541520d\"" Jan 28 01:25:11.951133 containerd[1726]: time="2026-01-28T01:25:11.951093567Z" level=info msg="CreateContainer within sandbox \"001ec16843cab67e8d90e36e3e40f8285bb455304cfaff6c34f4813f2feefe59\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6c8ebe463a2814d193ebc78583c51c1b16e1c493fc3ad578c5a5baba7924f112\"" Jan 28 01:25:11.951559 containerd[1726]: time="2026-01-28T01:25:11.951533966Z" level=info msg="StartContainer for \"6c8ebe463a2814d193ebc78583c51c1b16e1c493fc3ad578c5a5baba7924f112\"" Jan 28 01:25:11.962832 systemd[1]: Started cri-containerd-d5a0382072631303b42615ca516b2f7caf5b5f1f1c5cc39c6bb58034342326aa.scope - libcontainer container 
d5a0382072631303b42615ca516b2f7caf5b5f1f1c5cc39c6bb58034342326aa. Jan 28 01:25:11.990651 systemd[1]: Started cri-containerd-6c8ebe463a2814d193ebc78583c51c1b16e1c493fc3ad578c5a5baba7924f112.scope - libcontainer container 6c8ebe463a2814d193ebc78583c51c1b16e1c493fc3ad578c5a5baba7924f112. Jan 28 01:25:11.997001 systemd[1]: Started cri-containerd-59100fb51d212cc65020a5459bd8011f7d509907755995088aa02b813541520d.scope - libcontainer container 59100fb51d212cc65020a5459bd8011f7d509907755995088aa02b813541520d. Jan 28 01:25:12.015913 containerd[1726]: time="2026-01-28T01:25:12.015713157Z" level=info msg="StartContainer for \"d5a0382072631303b42615ca516b2f7caf5b5f1f1c5cc39c6bb58034342326aa\" returns successfully" Jan 28 01:25:12.056714 kubelet[2822]: E0128 01:25:12.056691 2822 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:25:12.063319 containerd[1726]: time="2026-01-28T01:25:12.063270201Z" level=info msg="StartContainer for \"59100fb51d212cc65020a5459bd8011f7d509907755995088aa02b813541520d\" returns successfully" Jan 28 01:25:12.063732 containerd[1726]: time="2026-01-28T01:25:12.063363721Z" level=info msg="StartContainer for \"6c8ebe463a2814d193ebc78583c51c1b16e1c493fc3ad578c5a5baba7924f112\" returns successfully" Jan 28 01:25:12.955836 kubelet[2822]: E0128 01:25:12.955810 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:12.959654 kubelet[2822]: E0128 01:25:12.957347 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:12.961089 kubelet[2822]: E0128 01:25:12.961072 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:13.175063 kubelet[2822]: I0128 01:25:13.175005 2822 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:13.963615 kubelet[2822]: E0128 01:25:13.963202 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:13.963615 kubelet[2822]: E0128 01:25:13.963269 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:13.963615 kubelet[2822]: E0128 01:25:13.963485 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.119585 kubelet[2822]: E0128 01:25:14.119544 2822 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-20d4350ff0\" not found" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.153183 kubelet[2822]: I0128 01:25:14.153147 2822 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.221832 kubelet[2822]: I0128 01:25:14.221413 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.236775 kubelet[2822]: E0128 01:25:14.236531 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-20d4350ff0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.236775 kubelet[2822]: I0128 01:25:14.236567 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.238358 kubelet[2822]: E0128 01:25:14.238336 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.238560 kubelet[2822]: I0128 01:25:14.238453 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.241122 kubelet[2822]: E0128 01:25:14.241074 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-20d4350ff0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.899103 kubelet[2822]: I0128 01:25:14.898894 2822 apiserver.go:52] "Watching apiserver" Jan 28 01:25:14.922227 kubelet[2822]: I0128 01:25:14.922194 2822 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:25:14.963555 kubelet[2822]: I0128 01:25:14.963254 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.963555 kubelet[2822]: I0128 01:25:14.963351 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.964194 kubelet[2822]: I0128 01:25:14.963984 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.967800 kubelet[2822]: E0128 01:25:14.967623 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-20d4350ff0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.968432 kubelet[2822]: E0128 01:25:14.968315 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-20d4350ff0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:14.969917 kubelet[2822]: E0128 01:25:14.969797 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:15.964884 kubelet[2822]: I0128 01:25:15.964783 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:15.974094 kubelet[2822]: W0128 01:25:15.974062 2822 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Jan 28 01:25:16.384682 systemd[1]: Reloading requested from client PID 3094 ('systemctl') (unit session-9.scope)... Jan 28 01:25:16.384694 systemd[1]: Reloading... Jan 28 01:25:16.473488 zram_generator::config[3137]: No configuration found. Jan 28 01:25:16.569137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:25:16.668303 systemd[1]: Reloading finished in 283 ms. Jan 28 01:25:16.712216 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:25:16.720060 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:25:16.720238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:25:16.720281 systemd[1]: kubelet.service: Consumed 1.301s CPU time, 128.3M memory peak, 0B memory swap peak. Jan 28 01:25:16.725802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:25:16.821499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:25:16.833789 (kubelet)[3198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:25:16.871344 kubelet[3198]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:25:16.871344 kubelet[3198]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:25:16.871344 kubelet[3198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:25:16.871703 kubelet[3198]: I0128 01:25:16.871399 3198 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:25:16.878758 kubelet[3198]: I0128 01:25:16.878300 3198 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:25:16.878758 kubelet[3198]: I0128 01:25:16.878343 3198 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:25:16.879059 kubelet[3198]: I0128 01:25:16.879036 3198 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:25:16.880497 kubelet[3198]: I0128 01:25:16.880480 3198 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 01:25:16.883010 kubelet[3198]: I0128 01:25:16.882992 3198 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:25:16.886488 kubelet[3198]: E0128 01:25:16.886047 3198 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:25:16.886488 kubelet[3198]: I0128 01:25:16.886074 3198 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
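The kubelet restarted just above (PID 3198) repeats three deprecation warnings: --container-runtime-endpoint and --volume-plugin-dir belong in the file passed via --config, and --pod-infra-container-image goes away in 1.35 because the image garbage collector will take the sandbox image from the CRI runtime. As a compact reference, a sketch in which the KubeletConfiguration field names are the standard ones but the mapping table itself is ours:

    # Where each deprecated kubelet flag from the warnings above lives once it
    # moves into the KubeletConfiguration file (kubelet --config).
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # No config-file replacement: per the warning, after 1.35 the sandbox
        # image comes from the CRI runtime (containerd) instead.
        "--pod-infra-container-image": None,
    }

    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag:31} -> {field or 'set in the CRI runtime config'}")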
Jan 28 01:25:16.888673 kubelet[3198]: I0128 01:25:16.888653 3198 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 01:25:16.888843 kubelet[3198]: I0128 01:25:16.888819 3198 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:25:16.888995 kubelet[3198]: I0128 01:25:16.888843 3198 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-20d4350ff0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:25:16.889081 kubelet[3198]: I0128 01:25:16.889003 3198 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:25:16.889081 kubelet[3198]: I0128 01:25:16.889012 3198 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:25:16.889081 kubelet[3198]: I0128 01:25:16.889051 3198 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:25:16.889235 kubelet[3198]: I0128 01:25:16.889152 3198 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:25:16.889235 kubelet[3198]: I0128 01:25:16.889164 3198 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:25:16.889235 kubelet[3198]: I0128 01:25:16.889180 3198 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:25:16.889235 kubelet[3198]: I0128 01:25:16.889189 3198 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:25:16.891494 kubelet[3198]: I0128 01:25:16.889974 3198 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:25:16.891494 kubelet[3198]: I0128 01:25:16.890391 3198 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:25:16.891494 kubelet[3198]: I0128 01:25:16.890786 3198 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:25:16.891494 kubelet[3198]: I0128 01:25:16.890814 3198 server.go:1287] "Started kubelet" Jan 28 01:25:16.893096 kubelet[3198]: I0128 01:25:16.893074 
3198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:25:16.895983 kubelet[3198]: I0128 01:25:16.895947 3198 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:25:16.899078 kubelet[3198]: I0128 01:25:16.899024 3198 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:25:16.899369 kubelet[3198]: I0128 01:25:16.899355 3198 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:25:16.899681 kubelet[3198]: I0128 01:25:16.899664 3198 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:25:16.901144 kubelet[3198]: I0128 01:25:16.901127 3198 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:25:16.901412 kubelet[3198]: E0128 01:25:16.901394 3198 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-20d4350ff0\" not found" Jan 28 01:25:16.902781 kubelet[3198]: I0128 01:25:16.902752 3198 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:25:16.905966 kubelet[3198]: I0128 01:25:16.905947 3198 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:25:16.906272 kubelet[3198]: I0128 01:25:16.906261 3198 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:25:16.936642 kubelet[3198]: I0128 01:25:16.934707 3198 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:25:16.937223 kubelet[3198]: I0128 01:25:16.936877 3198 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:25:16.940747 kubelet[3198]: I0128 01:25:16.940724 3198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:25:16.941664 kubelet[3198]: I0128 01:25:16.941647 3198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:25:16.941757 kubelet[3198]: I0128 01:25:16.941747 3198 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:25:16.941823 kubelet[3198]: I0128 01:25:16.941814 3198 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
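The "Creating Container Manager object" entry above serializes the whole node config, burying the default hard-eviction thresholds in one JSON blob. Spelled out as the rules the eviction manager enforces, with the values copied from that blob (only the reshaping is ours):

    # Hard-eviction thresholds from the nodeConfig JSON logged above, trimmed
    # to the fields needed for display.
    thresholds = [
        {"Signal": "imagefs.available",  "Percentage": 0.15},
        {"Signal": "imagefs.inodesFree", "Percentage": 0.05},
        {"Signal": "memory.available",   "Quantity": "100Mi"},
        {"Signal": "nodefs.available",   "Percentage": 0.10},
        {"Signal": "nodefs.inodesFree",  "Percentage": 0.05},
    ]

    for t in thresholds:
        limit = t.get("Quantity") or f"{t['Percentage']:.0%}"
        # Every operator in the logged config is LessThan: evict once the
        # signal drops below the limit.
        print(f"evict when {t['Signal']} < {limit}")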
Jan 28 01:25:16.941870 kubelet[3198]: I0128 01:25:16.941862 3198 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:25:16.941951 kubelet[3198]: E0128 01:25:16.941937 3198 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:25:16.953534 kubelet[3198]: I0128 01:25:16.953489 3198 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:25:17.006285 kubelet[3198]: I0128 01:25:17.006232 3198 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:25:17.006538 kubelet[3198]: I0128 01:25:17.006522 3198 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:25:17.006719 kubelet[3198]: I0128 01:25:17.006595 3198 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:25:17.007129 kubelet[3198]: I0128 01:25:17.007101 3198 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:25:17.007596 kubelet[3198]: I0128 01:25:17.007214 3198 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:25:17.007596 kubelet[3198]: I0128 01:25:17.007243 3198 policy_none.go:49] "None policy: Start" Jan 28 01:25:17.007596 kubelet[3198]: I0128 01:25:17.007255 3198 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:25:17.007596 kubelet[3198]: I0128 01:25:17.007280 3198 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:25:17.007596 kubelet[3198]: I0128 01:25:17.007419 3198 state_mem.go:75] "Updated machine memory state" Jan 28 01:25:17.013351 kubelet[3198]: I0128 01:25:17.013334 3198 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:25:17.013581 kubelet[3198]: I0128 01:25:17.013567 3198 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:25:17.013674 kubelet[3198]: I0128 01:25:17.013644 3198 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:25:17.014237 kubelet[3198]: I0128 01:25:17.014222 3198 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:25:17.017172 kubelet[3198]: E0128 01:25:17.017040 3198 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:25:17.042756 kubelet[3198]: I0128 01:25:17.042723 3198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.043720 kubelet[3198]: I0128 01:25:17.043029 3198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.043720 kubelet[3198]: I0128 01:25:17.043178 3198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.057256 kubelet[3198]: W0128 01:25:17.057014 3198 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 01:25:17.061779 kubelet[3198]: W0128 01:25:17.061654 3198 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 01:25:17.061779 kubelet[3198]: W0128 01:25:17.061668 3198 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 01:25:17.061779 kubelet[3198]: E0128 01:25:17.061714 3198 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-20d4350ff0\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.110117 kubelet[3198]: I0128 01:25:17.110078 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37f75a2318e0e5ee0e8bd42564acad50-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-20d4350ff0\" (UID: \"37f75a2318e0e5ee0e8bd42564acad50\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.110117 kubelet[3198]: I0128 01:25:17.110118 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248391 kubelet[3198]: I0128 01:25:17.110140 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248391 kubelet[3198]: I0128 01:25:17.110158 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248391 kubelet[3198]: I0128 01:25:17.110177 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37f75a2318e0e5ee0e8bd42564acad50-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.6-n-20d4350ff0\" (UID: \"37f75a2318e0e5ee0e8bd42564acad50\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248391 kubelet[3198]: I0128 01:25:17.110191 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248391 kubelet[3198]: I0128 01:25:17.110210 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3fd0a7680c4b8b655e5b1113c0aa8924-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-20d4350ff0\" (UID: \"3fd0a7680c4b8b655e5b1113c0aa8924\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248532 kubelet[3198]: I0128 01:25:17.110225 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/789d310236829e9caabbd49a28e3e66e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-20d4350ff0\" (UID: \"789d310236829e9caabbd49a28e3e66e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248532 kubelet[3198]: I0128 01:25:17.110240 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37f75a2318e0e5ee0e8bd42564acad50-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-20d4350ff0\" (UID: \"37f75a2318e0e5ee0e8bd42564acad50\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248532 kubelet[3198]: I0128 01:25:17.126572 3198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.248532 kubelet[3198]: I0128 01:25:17.143629 3198 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.250292 kubelet[3198]: I0128 01:25:17.249736 3198 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-20d4350ff0" Jan 28 01:25:17.900178 kubelet[3198]: I0128 01:25:17.899934 3198 apiserver.go:52] "Watching apiserver" Jan 28 01:25:17.906784 kubelet[3198]: I0128 01:25:17.906754 3198 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:25:17.996969 kubelet[3198]: I0128 01:25:17.996905 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-20d4350ff0" podStartSLOduration=2.996885729 podStartE2EDuration="2.996885729s" podCreationTimestamp="2026-01-28 01:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:25:17.996514769 +0000 UTC m=+1.159655911" watchObservedRunningTime="2026-01-28 01:25:17.996885729 +0000 UTC m=+1.160026831" Jan 28 01:25:18.023018 kubelet[3198]: I0128 01:25:18.022937 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-20d4350ff0" podStartSLOduration=1.022919508 podStartE2EDuration="1.022919508s" podCreationTimestamp="2026-01-28 01:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 01:25:18.010896237 +0000 UTC m=+1.174037419" watchObservedRunningTime="2026-01-28 01:25:18.022919508 +0000 UTC m=+1.186060650" Jan 28 01:25:18.039647 kubelet[3198]: I0128 01:25:18.039579 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-20d4350ff0" podStartSLOduration=1.039560214 podStartE2EDuration="1.039560214s" podCreationTimestamp="2026-01-28 01:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:25:18.023281587 +0000 UTC m=+1.186422729" watchObservedRunningTime="2026-01-28 01:25:18.039560214 +0000 UTC m=+1.202701356" Jan 28 01:25:21.576667 kubelet[3198]: I0128 01:25:21.576593 3198 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:25:21.577370 kubelet[3198]: I0128 01:25:21.577242 3198 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:25:21.577401 containerd[1726]: time="2026-01-28T01:25:21.577086582Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:25:22.343535 kubelet[3198]: I0128 01:25:22.343493 3198 status_manager.go:890] "Failed to get status for pod" podUID="9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4" pod="kube-system/kube-proxy-fzj2v" err="pods \"kube-proxy-fzj2v\" is forbidden: User \"system:node:ci-4081.3.6-n-20d4350ff0\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.6-n-20d4350ff0' and this object" Jan 28 01:25:22.343670 kubelet[3198]: W0128 01:25:22.343569 3198 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.6-n-20d4350ff0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.6-n-20d4350ff0' and this object Jan 28 01:25:22.343670 kubelet[3198]: E0128 01:25:22.343594 3198 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081.3.6-n-20d4350ff0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.6-n-20d4350ff0' and this object" logger="UnhandledError" Jan 28 01:25:22.343670 kubelet[3198]: W0128 01:25:22.343629 3198 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-n-20d4350ff0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.6-n-20d4350ff0' and this object Jan 28 01:25:22.343670 kubelet[3198]: E0128 01:25:22.343639 3198 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-n-20d4350ff0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.6-n-20d4350ff0' and this object" logger="UnhandledError" Jan 28 01:25:22.346566 systemd[1]: Created slice 
kubepods-besteffort-pod9cd4d5b7_ec00_4cb8_b541_5e9a1ca70db4.slice - libcontainer container kubepods-besteffort-pod9cd4d5b7_ec00_4cb8_b541_5e9a1ca70db4.slice. Jan 28 01:25:22.439026 kubelet[3198]: I0128 01:25:22.438994 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4-lib-modules\") pod \"kube-proxy-fzj2v\" (UID: \"9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4\") " pod="kube-system/kube-proxy-fzj2v" Jan 28 01:25:22.439205 kubelet[3198]: I0128 01:25:22.439032 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkfb\" (UniqueName: \"kubernetes.io/projected/9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4-kube-api-access-dbkfb\") pod \"kube-proxy-fzj2v\" (UID: \"9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4\") " pod="kube-system/kube-proxy-fzj2v" Jan 28 01:25:22.439205 kubelet[3198]: I0128 01:25:22.439059 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4-kube-proxy\") pod \"kube-proxy-fzj2v\" (UID: \"9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4\") " pod="kube-system/kube-proxy-fzj2v" Jan 28 01:25:22.439205 kubelet[3198]: I0128 01:25:22.439075 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4-xtables-lock\") pod \"kube-proxy-fzj2v\" (UID: \"9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4\") " pod="kube-system/kube-proxy-fzj2v" Jan 28 01:25:22.707046 systemd[1]: Created slice kubepods-besteffort-pod28c0ac2c_5bf0_47d6_81bf_aafb093ecdbd.slice - libcontainer container kubepods-besteffort-pod28c0ac2c_5bf0_47d6_81bf_aafb093ecdbd.slice. Jan 28 01:25:22.740819 kubelet[3198]: I0128 01:25:22.740716 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/28c0ac2c-5bf0-47d6-81bf-aafb093ecdbd-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5zgvq\" (UID: \"28c0ac2c-5bf0-47d6-81bf-aafb093ecdbd\") " pod="tigera-operator/tigera-operator-7dcd859c48-5zgvq" Jan 28 01:25:22.740819 kubelet[3198]: I0128 01:25:22.740785 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj582\" (UniqueName: \"kubernetes.io/projected/28c0ac2c-5bf0-47d6-81bf-aafb093ecdbd-kube-api-access-nj582\") pod \"tigera-operator-7dcd859c48-5zgvq\" (UID: \"28c0ac2c-5bf0-47d6-81bf-aafb093ecdbd\") " pod="tigera-operator/tigera-operator-7dcd859c48-5zgvq" Jan 28 01:25:23.013226 containerd[1726]: time="2026-01-28T01:25:23.013129278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5zgvq,Uid:28c0ac2c-5bf0-47d6-81bf-aafb093ecdbd,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:25:23.046188 containerd[1726]: time="2026-01-28T01:25:23.046038533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:25:23.046188 containerd[1726]: time="2026-01-28T01:25:23.046166053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:25:23.046485 containerd[1726]: time="2026-01-28T01:25:23.046184973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:23.046485 containerd[1726]: time="2026-01-28T01:25:23.046280693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:23.065611 systemd[1]: Started cri-containerd-33ce3420cb3fd4c0f9b83fe94319a7bce100f2324b07bf53f3aeefd8f9f8b14f.scope - libcontainer container 33ce3420cb3fd4c0f9b83fe94319a7bce100f2324b07bf53f3aeefd8f9f8b14f. Jan 28 01:25:23.092952 containerd[1726]: time="2026-01-28T01:25:23.092821497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5zgvq,Uid:28c0ac2c-5bf0-47d6-81bf-aafb093ecdbd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"33ce3420cb3fd4c0f9b83fe94319a7bce100f2324b07bf53f3aeefd8f9f8b14f\"" Jan 28 01:25:23.095487 containerd[1726]: time="2026-01-28T01:25:23.094820935Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:25:23.554302 containerd[1726]: time="2026-01-28T01:25:23.554263582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fzj2v,Uid:9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:23.591361 containerd[1726]: time="2026-01-28T01:25:23.591285594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:25:23.591920 containerd[1726]: time="2026-01-28T01:25:23.591825913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:25:23.591983 containerd[1726]: time="2026-01-28T01:25:23.591942673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:23.592637 containerd[1726]: time="2026-01-28T01:25:23.592551753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:23.610636 systemd[1]: Started cri-containerd-4dac491999ed3cc2372e49ae57ad3d63a4c0af235cc837992530ba93636cd721.scope - libcontainer container 4dac491999ed3cc2372e49ae57ad3d63a4c0af235cc837992530ba93636cd721. 
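Each runc shim launch in this journal is preceded by the same four "loading plugin" lines (event.v1.publisher, internal.v1.shutdown, ttrpc.v1.task, ttrpc.v1.pause), as seen above for the tigera-operator and kube-proxy sandboxes. Counting any one of them counts shim launches; a throwaway sketch (marker choice and helper are ours):

    # Count runc-shim launches in a journal excerpt: in these logs the
    # publisher "loading plugin" line appears exactly once per shim start.
    MARKER = 'loading plugin \\"io.containerd.event.v1.publisher\\"'

    def shim_starts(journal_text):
        return journal_text.count(MARKER)

Over the stretch above it returns 2, one launch per sandbox.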
Jan 28 01:25:23.630146 containerd[1726]: time="2026-01-28T01:25:23.630108444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fzj2v,Uid:9cd4d5b7-ec00-4cb8-b541-5e9a1ca70db4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dac491999ed3cc2372e49ae57ad3d63a4c0af235cc837992530ba93636cd721\"" Jan 28 01:25:23.634502 containerd[1726]: time="2026-01-28T01:25:23.634465641Z" level=info msg="CreateContainer within sandbox \"4dac491999ed3cc2372e49ae57ad3d63a4c0af235cc837992530ba93636cd721\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:25:23.674611 containerd[1726]: time="2026-01-28T01:25:23.674550370Z" level=info msg="CreateContainer within sandbox \"4dac491999ed3cc2372e49ae57ad3d63a4c0af235cc837992530ba93636cd721\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"535d1923598fafede69467a3ea5df382fa0996a27db3cb7e1e25187acb87bc3d\"" Jan 28 01:25:23.675415 containerd[1726]: time="2026-01-28T01:25:23.675376889Z" level=info msg="StartContainer for \"535d1923598fafede69467a3ea5df382fa0996a27db3cb7e1e25187acb87bc3d\"" Jan 28 01:25:23.703590 systemd[1]: Started cri-containerd-535d1923598fafede69467a3ea5df382fa0996a27db3cb7e1e25187acb87bc3d.scope - libcontainer container 535d1923598fafede69467a3ea5df382fa0996a27db3cb7e1e25187acb87bc3d. Jan 28 01:25:23.731245 containerd[1726]: time="2026-01-28T01:25:23.731180006Z" level=info msg="StartContainer for \"535d1923598fafede69467a3ea5df382fa0996a27db3cb7e1e25187acb87bc3d\" returns successfully" Jan 28 01:25:24.790650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542956690.mount: Deactivated successfully. Jan 28 01:25:25.159438 kubelet[3198]: I0128 01:25:25.159365 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fzj2v" podStartSLOduration=3.159347549 podStartE2EDuration="3.159347549s" podCreationTimestamp="2026-01-28 01:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:25:24.010777431 +0000 UTC m=+7.173918653" watchObservedRunningTime="2026-01-28 01:25:25.159347549 +0000 UTC m=+8.322488731" Jan 28 01:25:25.262480 containerd[1726]: time="2026-01-28T01:25:25.262377910Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:25.264572 containerd[1726]: time="2026-01-28T01:25:25.264430428Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 28 01:25:25.267191 containerd[1726]: time="2026-01-28T01:25:25.267169706Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:25.273086 containerd[1726]: time="2026-01-28T01:25:25.272145182Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:25.273086 containerd[1726]: time="2026-01-28T01:25:25.272975381Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 
2.178123406s" Jan 28 01:25:25.273086 containerd[1726]: time="2026-01-28T01:25:25.273002301Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 28 01:25:25.275259 containerd[1726]: time="2026-01-28T01:25:25.275066100Z" level=info msg="CreateContainer within sandbox \"33ce3420cb3fd4c0f9b83fe94319a7bce100f2324b07bf53f3aeefd8f9f8b14f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:25:25.305221 containerd[1726]: time="2026-01-28T01:25:25.305175077Z" level=info msg="CreateContainer within sandbox \"33ce3420cb3fd4c0f9b83fe94319a7bce100f2324b07bf53f3aeefd8f9f8b14f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2213e8b8e93096466ab1b201fccdf7dc9334b1405619345282755aac86bcba19\"" Jan 28 01:25:25.305854 containerd[1726]: time="2026-01-28T01:25:25.305708556Z" level=info msg="StartContainer for \"2213e8b8e93096466ab1b201fccdf7dc9334b1405619345282755aac86bcba19\"" Jan 28 01:25:25.332617 systemd[1]: Started cri-containerd-2213e8b8e93096466ab1b201fccdf7dc9334b1405619345282755aac86bcba19.scope - libcontainer container 2213e8b8e93096466ab1b201fccdf7dc9334b1405619345282755aac86bcba19. Jan 28 01:25:25.355792 containerd[1726]: time="2026-01-28T01:25:25.355557878Z" level=info msg="StartContainer for \"2213e8b8e93096466ab1b201fccdf7dc9334b1405619345282755aac86bcba19\" returns successfully" Jan 28 01:25:26.015348 kubelet[3198]: I0128 01:25:26.015191 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5zgvq" podStartSLOduration=1.835483446 podStartE2EDuration="4.015173131s" podCreationTimestamp="2026-01-28 01:25:22 +0000 UTC" firstStartedPulling="2026-01-28 01:25:23.094198696 +0000 UTC m=+6.257339838" lastFinishedPulling="2026-01-28 01:25:25.273888381 +0000 UTC m=+8.437029523" observedRunningTime="2026-01-28 01:25:26.015021931 +0000 UTC m=+9.178163033" watchObservedRunningTime="2026-01-28 01:25:26.015173131 +0000 UTC m=+9.178314273" Jan 28 01:25:31.285666 sudo[2233]: pam_unix(sudo:session): session closed for user root Jan 28 01:25:31.366794 sshd[2230]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:31.373677 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:37432.service: Deactivated successfully. Jan 28 01:25:31.379122 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:25:31.379344 systemd[1]: session-9.scope: Consumed 5.305s CPU time, 154.5M memory peak, 0B memory swap peak. Jan 28 01:25:31.379952 systemd-logind[1706]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:25:31.381091 systemd-logind[1706]: Removed session 9. Jan 28 01:25:40.844559 systemd[1]: Created slice kubepods-besteffort-pod7cc5126b_1c95_4bc8_a17f_ea3219f2b4e5.slice - libcontainer container kubepods-besteffort-pod7cc5126b_1c95_4bc8_a17f_ea3219f2b4e5.slice. 
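The startup-latency entries above can be checked by hand: tigera-operator's podStartSLOduration is its podStartE2EDuration minus the image-pull window bounded by firstStartedPulling and lastFinishedPulling. The arithmetic, with the logged values (labels are ours; everything falls inside minute 01:25, so bare seconds suffice):

    # Figures copied from the tigera-operator "Observed pod startup duration"
    # entry above.
    first_started_pulling = 23.094198696   # 01:25:23.094198696
    last_finished_pulling = 25.273888381   # 01:25:25.273888381
    e2e_seconds = 4.015173131              # podStartE2EDuration

    pull_window = last_finished_pulling - first_started_pulling
    print(f"pull window:  {pull_window:.9f}s")                # 2.179689685
    print(f"SLO duration: {e2e_seconds - pull_window:.9f}s")  # 1.835483446, as logged

    # Effective rate for the 22147999-byte operator image (size and the nearby
    # 2.178123406s figure are from the Pulled entry above).
    print(f"pull rate:    ~{22147999 / pull_window / 1e6:.1f} MB/s")

kube-proxy, logged above with zero-valued pull timestamps, pulled no image, so its SLO and E2E durations coincide at 3.159347549s.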
Jan 28 01:25:40.848963 kubelet[3198]: I0128 01:25:40.848910 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cc5126b-1c95-4bc8-a17f-ea3219f2b4e5-tigera-ca-bundle\") pod \"calico-typha-5964dff54f-cl6h4\" (UID: \"7cc5126b-1c95-4bc8-a17f-ea3219f2b4e5\") " pod="calico-system/calico-typha-5964dff54f-cl6h4" Jan 28 01:25:40.848963 kubelet[3198]: I0128 01:25:40.848953 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7cc5126b-1c95-4bc8-a17f-ea3219f2b4e5-typha-certs\") pod \"calico-typha-5964dff54f-cl6h4\" (UID: \"7cc5126b-1c95-4bc8-a17f-ea3219f2b4e5\") " pod="calico-system/calico-typha-5964dff54f-cl6h4" Jan 28 01:25:40.849686 kubelet[3198]: I0128 01:25:40.848973 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb57f\" (UniqueName: \"kubernetes.io/projected/7cc5126b-1c95-4bc8-a17f-ea3219f2b4e5-kube-api-access-mb57f\") pod \"calico-typha-5964dff54f-cl6h4\" (UID: \"7cc5126b-1c95-4bc8-a17f-ea3219f2b4e5\") " pod="calico-system/calico-typha-5964dff54f-cl6h4" Jan 28 01:25:41.076173 systemd[1]: Created slice kubepods-besteffort-podc46470c2_c980_4f78_a4d3_83c5fac05ab9.slice - libcontainer container kubepods-besteffort-podc46470c2_c980_4f78_a4d3_83c5fac05ab9.slice. Jan 28 01:25:41.149887 containerd[1726]: time="2026-01-28T01:25:41.149144287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5964dff54f-cl6h4,Uid:7cc5126b-1c95-4bc8-a17f-ea3219f2b4e5,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:41.151814 kubelet[3198]: I0128 01:25:41.151789 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c46470c2-c980-4f78-a4d3-83c5fac05ab9-node-certs\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.151956 kubelet[3198]: I0128 01:25:41.151823 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-policysync\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.151956 kubelet[3198]: I0128 01:25:41.151840 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfzz4\" (UniqueName: \"kubernetes.io/projected/c46470c2-c980-4f78-a4d3-83c5fac05ab9-kube-api-access-rfzz4\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152016 kubelet[3198]: I0128 01:25:41.151956 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-cni-bin-dir\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152016 kubelet[3198]: I0128 01:25:41.151975 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46470c2-c980-4f78-a4d3-83c5fac05ab9-tigera-ca-bundle\") pod \"calico-node-2k28l\" (UID: 
\"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152016 kubelet[3198]: I0128 01:25:41.151995 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-var-run-calico\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152085 kubelet[3198]: I0128 01:25:41.152012 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-cni-log-dir\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152085 kubelet[3198]: I0128 01:25:41.152041 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-cni-net-dir\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152085 kubelet[3198]: I0128 01:25:41.152057 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-lib-modules\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152085 kubelet[3198]: I0128 01:25:41.152074 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-var-lib-calico\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152165 kubelet[3198]: I0128 01:25:41.152089 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-flexvol-driver-host\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.152165 kubelet[3198]: I0128 01:25:41.152117 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c46470c2-c980-4f78-a4d3-83c5fac05ab9-xtables-lock\") pod \"calico-node-2k28l\" (UID: \"c46470c2-c980-4f78-a4d3-83c5fac05ab9\") " pod="calico-system/calico-node-2k28l" Jan 28 01:25:41.190364 containerd[1726]: time="2026-01-28T01:25:41.189994575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:25:41.190364 containerd[1726]: time="2026-01-28T01:25:41.190054935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:25:41.190364 containerd[1726]: time="2026-01-28T01:25:41.190073575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:41.190364 containerd[1726]: time="2026-01-28T01:25:41.190147855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:25:41.211632 systemd[1]: Started cri-containerd-9b07c84bfc9c2adcb156aa318395ea9167d133bee0cbdd54332aafbdf2d38080.scope - libcontainer container 9b07c84bfc9c2adcb156aa318395ea9167d133bee0cbdd54332aafbdf2d38080. Jan 28 01:25:41.255224 kubelet[3198]: E0128 01:25:41.254883 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.255224 kubelet[3198]: W0128 01:25:41.254913 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.255224 kubelet[3198]: E0128 01:25:41.254935 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.255224 kubelet[3198]: E0128 01:25:41.255072 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.255224 kubelet[3198]: W0128 01:25:41.255079 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.255224 kubelet[3198]: E0128 01:25:41.255087 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.255443 kubelet[3198]: E0128 01:25:41.255290 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.255443 kubelet[3198]: W0128 01:25:41.255298 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.255443 kubelet[3198]: E0128 01:25:41.255307 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.256797 kubelet[3198]: E0128 01:25:41.256660 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.256797 kubelet[3198]: W0128 01:25:41.256681 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.256797 kubelet[3198]: E0128 01:25:41.256696 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:41.257383 kubelet[3198]: E0128 01:25:41.257244 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.257383 kubelet[3198]: W0128 01:25:41.257259 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.257383 kubelet[3198]: E0128 01:25:41.257274 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.257983 kubelet[3198]: E0128 01:25:41.257814 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.257983 kubelet[3198]: W0128 01:25:41.257929 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.257983 kubelet[3198]: E0128 01:25:41.257950 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.258603 kubelet[3198]: E0128 01:25:41.258399 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.258603 kubelet[3198]: W0128 01:25:41.258415 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.258603 kubelet[3198]: E0128 01:25:41.258537 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.259354 kubelet[3198]: E0128 01:25:41.259153 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.259354 kubelet[3198]: W0128 01:25:41.259174 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.259617 kubelet[3198]: E0128 01:25:41.259496 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.260225 kubelet[3198]: E0128 01:25:41.260200 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.260225 kubelet[3198]: W0128 01:25:41.260222 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.260359 kubelet[3198]: E0128 01:25:41.260327 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:41.260898 kubelet[3198]: E0128 01:25:41.260846 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.260958 containerd[1726]: time="2026-01-28T01:25:41.260851440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5964dff54f-cl6h4,Uid:7cc5126b-1c95-4bc8-a17f-ea3219f2b4e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b07c84bfc9c2adcb156aa318395ea9167d133bee0cbdd54332aafbdf2d38080\"" Jan 28 01:25:41.261105 kubelet[3198]: W0128 01:25:41.260970 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.261105 kubelet[3198]: E0128 01:25:41.261028 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.261620 kubelet[3198]: E0128 01:25:41.261596 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.262239 kubelet[3198]: W0128 01:25:41.261615 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.262546 kubelet[3198]: E0128 01:25:41.262431 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.262727 kubelet[3198]: E0128 01:25:41.262709 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.262727 kubelet[3198]: W0128 01:25:41.262723 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.263208 kubelet[3198]: E0128 01:25:41.263140 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:41.263622 kubelet[3198]: E0128 01:25:41.263596 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.263622 kubelet[3198]: W0128 01:25:41.263614 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.264367 kubelet[3198]: E0128 01:25:41.264236 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 28 01:25:41.266436 containerd[1726]: time="2026-01-28T01:25:41.266224716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 28 01:25:41.270220 kubelet[3198]: E0128 01:25:41.270186 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04"
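The pod_workers error above will keep repeating until Calico writes a CNI config (typically under /etc/cni/net.d) and kubelet flips the runtime network to NetworkReady=true; pods such as csi-node-driver-kwqqh cannot be synced before that. A sketch of reading the node conditions that surface this state via client-go follows; the kubeconfig path and node name are placeholders, not values from this system.

```go
// Sketch: list the node conditions behind "network is not ready" using
// client-go. Until the CNI plugin initializes, kubelet reports the node
// Ready condition as False with message "cni plugin not initialized".
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "my-node", metav1.GetOptions{}) // placeholder name
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s reason=%s msg=%s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```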
Jan 28 01:25:41.354179 kubelet[3198]: I0128 01:25:41.354091 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ef4dca-fc9b-48e6-a83b-e247508a0b04-kubelet-dir\") pod \"csi-node-driver-kwqqh\" (UID: \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\") " pod="calico-system/csi-node-driver-kwqqh"
Jan 28 01:25:41.354485 kubelet[3198]: I0128 01:25:41.354288 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b0ef4dca-fc9b-48e6-a83b-e247508a0b04-varrun\") pod \"csi-node-driver-kwqqh\" (UID: \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\") " pod="calico-system/csi-node-driver-kwqqh"
Jan 28 01:25:41.354930 kubelet[3198]: I0128 01:25:41.354771 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b0ef4dca-fc9b-48e6-a83b-e247508a0b04-registration-dir\") pod \"csi-node-driver-kwqqh\" (UID: \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\") " pod="calico-system/csi-node-driver-kwqqh"
Jan 28 01:25:41.356372 kubelet[3198]: I0128 01:25:41.356324 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqzmg\" (UniqueName: \"kubernetes.io/projected/b0ef4dca-fc9b-48e6-a83b-e247508a0b04-kube-api-access-zqzmg\") pod \"csi-node-driver-kwqqh\" (UID: \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\") " pod="calico-system/csi-node-driver-kwqqh"
Jan 28 01:25:41.358316 kubelet[3198]: I0128 01:25:41.358272 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b0ef4dca-fc9b-48e6-a83b-e247508a0b04-socket-dir\") pod \"csi-node-driver-kwqqh\" (UID: \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\") " pod="calico-system/csi-node-driver-kwqqh"
Jan 28 01:25:41.380619 containerd[1726]: time="2026-01-28T01:25:41.379785709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2k28l,Uid:c46470c2-c980-4f78-a4d3-83c5fac05ab9,Namespace:calico-system,Attempt:0,}"
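The reconciler entries above show the five volumes kubelet verifies before starting csi-node-driver-kwqqh: four hostPath mounts plus a projected service-account token (kube-api-access-zqzmg). A sketch of how such hostPath volumes are declared with the k8s.io/api types follows; the volume names match the log, but the host paths are typical CSI node-driver locations and are assumptions, not values read from this system.

```go
// Sketch of the hostPath volume entries behind the reconciler messages
// above. Paths are assumed, illustrative defaults for a CSI node driver.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	vols := []corev1.Volume{
		hostPathVolume("kubelet-dir", "/var/lib/kubelet"),                       // assumed path
		hostPathVolume("varrun", "/var/run"),                                    // assumed path
		hostPathVolume("registration-dir", "/var/lib/kubelet/plugins_registry"), // assumed path
		hostPathVolume("socket-dir", "/var/run/csi"),                            // assumed path
	}
	for _, v := range vols {
		fmt.Println(v.Name, "->", v.VolumeSource.HostPath.Path)
	}
}
```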
Jan 28 01:25:41.433371 containerd[1726]: time="2026-01-28T01:25:41.432102948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:25:41.433371 containerd[1726]: time="2026-01-28T01:25:41.432151348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:25:41.433371 containerd[1726]: time="2026-01-28T01:25:41.432161628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:25:41.433371 containerd[1726]: time="2026-01-28T01:25:41.432229748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:25:41.448614 systemd[1]: Started cri-containerd-bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89.scope - libcontainer container bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89.
Error: unexpected end of JSON input" Jan 28 01:25:41.476515 containerd[1726]: time="2026-01-28T01:25:41.476309514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2k28l,Uid:c46470c2-c980-4f78-a4d3-83c5fac05ab9,Namespace:calico-system,Attempt:0,} returns sandbox id \"bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89\"" Jan 28 01:25:41.483523 kubelet[3198]: E0128 01:25:41.483454 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:41.483723 kubelet[3198]: W0128 01:25:41.483657 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:41.483723 kubelet[3198]: E0128 01:25:41.483682 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:42.366841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432644557.mount: Deactivated successfully. Jan 28 01:25:42.943823 kubelet[3198]: E0128 01:25:42.942684 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:25:43.035731 containerd[1726]: time="2026-01-28T01:25:43.035038711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:43.037272 containerd[1726]: time="2026-01-28T01:25:43.037224949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 28 01:25:43.046973 containerd[1726]: time="2026-01-28T01:25:43.046912701Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:43.051929 containerd[1726]: time="2026-01-28T01:25:43.051628858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:43.052364 containerd[1726]: time="2026-01-28T01:25:43.052332497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.786073581s" Jan 28 01:25:43.052410 containerd[1726]: time="2026-01-28T01:25:43.052363257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 28 01:25:43.053321 containerd[1726]: time="2026-01-28T01:25:43.053294777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:25:43.067312 containerd[1726]: time="2026-01-28T01:25:43.067266086Z" level=info msg="CreateContainer within sandbox \"9b07c84bfc9c2adcb156aa318395ea9167d133bee0cbdd54332aafbdf2d38080\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 01:25:43.093851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505100411.mount: Deactivated successfully. Jan 28 01:25:43.108813 containerd[1726]: time="2026-01-28T01:25:43.108773534Z" level=info msg="CreateContainer within sandbox \"9b07c84bfc9c2adcb156aa318395ea9167d133bee0cbdd54332aafbdf2d38080\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aeb1dc91bda0db2f9db331733f708afd484a791a69097cf1703ffb06d08c9845\"" Jan 28 01:25:43.109263 containerd[1726]: time="2026-01-28T01:25:43.109242773Z" level=info msg="StartContainer for \"aeb1dc91bda0db2f9db331733f708afd484a791a69097cf1703ffb06d08c9845\"" Jan 28 01:25:43.140621 systemd[1]: Started cri-containerd-aeb1dc91bda0db2f9db331733f708afd484a791a69097cf1703ffb06d08c9845.scope - libcontainer container aeb1dc91bda0db2f9db331733f708afd484a791a69097cf1703ffb06d08c9845. Jan 28 01:25:43.178878 containerd[1726]: time="2026-01-28T01:25:43.178762880Z" level=info msg="StartContainer for \"aeb1dc91bda0db2f9db331733f708afd484a791a69097cf1703ffb06d08c9845\" returns successfully" Jan 28 01:25:44.052188 kubelet[3198]: I0128 01:25:44.051936 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5964dff54f-cl6h4" podStartSLOduration=2.264121266 podStartE2EDuration="4.051918046s" podCreationTimestamp="2026-01-28 01:25:40 +0000 UTC" firstStartedPulling="2026-01-28 01:25:41.265365237 +0000 UTC m=+24.428506339" lastFinishedPulling="2026-01-28 01:25:43.053161977 +0000 UTC m=+26.216303119" observedRunningTime="2026-01-28 01:25:44.051239046 +0000 UTC m=+27.214380228" watchObservedRunningTime="2026-01-28 01:25:44.051918046 +0000 UTC m=+27.215059188" Jan 28 01:25:44.068591 kubelet[3198]: E0128 01:25:44.067961 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.068591 kubelet[3198]: W0128 01:25:44.067985 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.068591 kubelet[3198]: E0128 01:25:44.068007 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.068591 kubelet[3198]: E0128 01:25:44.068324 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.068591 kubelet[3198]: W0128 01:25:44.068335 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.068591 kubelet[3198]: E0128 01:25:44.068376 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:44.069780 kubelet[3198]: E0128 01:25:44.068873 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.069780 kubelet[3198]: W0128 01:25:44.068884 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.069780 kubelet[3198]: E0128 01:25:44.069330 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.070101 kubelet[3198]: E0128 01:25:44.070017 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.070101 kubelet[3198]: W0128 01:25:44.070031 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.070101 kubelet[3198]: E0128 01:25:44.070042 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.070711 kubelet[3198]: E0128 01:25:44.070678 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.070711 kubelet[3198]: W0128 01:25:44.070692 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.071176 kubelet[3198]: E0128 01:25:44.070845 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.071361 kubelet[3198]: E0128 01:25:44.071348 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.071581 kubelet[3198]: W0128 01:25:44.071419 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.071581 kubelet[3198]: E0128 01:25:44.071433 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.071999 kubelet[3198]: E0128 01:25:44.071933 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.071999 kubelet[3198]: W0128 01:25:44.071948 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.072199 kubelet[3198]: E0128 01:25:44.072102 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:44.072440 kubelet[3198]: E0128 01:25:44.072399 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.072440 kubelet[3198]: W0128 01:25:44.072412 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.072793 kubelet[3198]: E0128 01:25:44.072678 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.072937 kubelet[3198]: E0128 01:25:44.072926 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.073390 kubelet[3198]: W0128 01:25:44.073152 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.073390 kubelet[3198]: E0128 01:25:44.073173 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.073390 kubelet[3198]: E0128 01:25:44.073369 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.073956 kubelet[3198]: W0128 01:25:44.073498 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.073956 kubelet[3198]: E0128 01:25:44.073514 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.074437 kubelet[3198]: E0128 01:25:44.074247 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.074437 kubelet[3198]: W0128 01:25:44.074312 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.074437 kubelet[3198]: E0128 01:25:44.074325 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.075337 kubelet[3198]: E0128 01:25:44.075141 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.075337 kubelet[3198]: W0128 01:25:44.075156 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.075337 kubelet[3198]: E0128 01:25:44.075167 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:44.075879 kubelet[3198]: E0128 01:25:44.075679 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.075879 kubelet[3198]: W0128 01:25:44.075692 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.075879 kubelet[3198]: E0128 01:25:44.075707 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.076333 kubelet[3198]: E0128 01:25:44.076229 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.076333 kubelet[3198]: W0128 01:25:44.076243 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.076333 kubelet[3198]: E0128 01:25:44.076254 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.076806 kubelet[3198]: E0128 01:25:44.076518 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.076806 kubelet[3198]: W0128 01:25:44.076530 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.076806 kubelet[3198]: E0128 01:25:44.076540 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.081058 kubelet[3198]: E0128 01:25:44.081039 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.081058 kubelet[3198]: W0128 01:25:44.081054 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.081142 kubelet[3198]: E0128 01:25:44.081067 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.081449 kubelet[3198]: E0128 01:25:44.081434 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.081449 kubelet[3198]: W0128 01:25:44.081447 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.081671 kubelet[3198]: E0128 01:25:44.081558 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:44.082411 kubelet[3198]: E0128 01:25:44.082390 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.082411 kubelet[3198]: W0128 01:25:44.082408 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.082513 kubelet[3198]: E0128 01:25:44.082422 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.083310 kubelet[3198]: E0128 01:25:44.083288 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.083496 kubelet[3198]: W0128 01:25:44.083479 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.083532 kubelet[3198]: E0128 01:25:44.083499 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.083960 kubelet[3198]: E0128 01:25:44.083938 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.083960 kubelet[3198]: W0128 01:25:44.083954 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.084045 kubelet[3198]: E0128 01:25:44.083967 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.084255 kubelet[3198]: E0128 01:25:44.084242 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.084255 kubelet[3198]: W0128 01:25:44.084253 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.085114 kubelet[3198]: E0128 01:25:44.084562 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.085114 kubelet[3198]: E0128 01:25:44.084584 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.085114 kubelet[3198]: W0128 01:25:44.084592 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.085114 kubelet[3198]: E0128 01:25:44.085048 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:44.085560 kubelet[3198]: E0128 01:25:44.085537 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.085560 kubelet[3198]: W0128 01:25:44.085555 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.085690 kubelet[3198]: E0128 01:25:44.085671 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.086147 kubelet[3198]: E0128 01:25:44.086128 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.086147 kubelet[3198]: W0128 01:25:44.086142 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.086239 kubelet[3198]: E0128 01:25:44.086222 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.086618 kubelet[3198]: E0128 01:25:44.086542 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.086618 kubelet[3198]: W0128 01:25:44.086558 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.086618 kubelet[3198]: E0128 01:25:44.086585 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.086915 kubelet[3198]: E0128 01:25:44.086892 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.086915 kubelet[3198]: W0128 01:25:44.086910 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.086915 kubelet[3198]: E0128 01:25:44.086939 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.087337 kubelet[3198]: E0128 01:25:44.087274 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.087337 kubelet[3198]: W0128 01:25:44.087289 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.087337 kubelet[3198]: E0128 01:25:44.087305 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:44.087794 kubelet[3198]: E0128 01:25:44.087774 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.087794 kubelet[3198]: W0128 01:25:44.087791 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.087951 kubelet[3198]: E0128 01:25:44.087806 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.088244 kubelet[3198]: E0128 01:25:44.088129 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.088244 kubelet[3198]: W0128 01:25:44.088142 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.088244 kubelet[3198]: E0128 01:25:44.088159 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.088892 kubelet[3198]: E0128 01:25:44.088597 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.088892 kubelet[3198]: W0128 01:25:44.088708 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.088892 kubelet[3198]: E0128 01:25:44.088730 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.089480 kubelet[3198]: E0128 01:25:44.089301 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.089480 kubelet[3198]: W0128 01:25:44.089316 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.089480 kubelet[3198]: E0128 01:25:44.089335 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.089796 kubelet[3198]: E0128 01:25:44.089677 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.089796 kubelet[3198]: W0128 01:25:44.089691 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.090105 kubelet[3198]: E0128 01:25:44.090068 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:25:44.090276 kubelet[3198]: E0128 01:25:44.090258 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:25:44.090276 kubelet[3198]: W0128 01:25:44.090272 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:25:44.090341 kubelet[3198]: E0128 01:25:44.090283 3198 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:25:44.138489 containerd[1726]: time="2026-01-28T01:25:44.138186059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:44.141019 containerd[1726]: time="2026-01-28T01:25:44.140992657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 28 01:25:44.143889 containerd[1726]: time="2026-01-28T01:25:44.143862655Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:44.147291 containerd[1726]: time="2026-01-28T01:25:44.147264052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:44.148133 containerd[1726]: time="2026-01-28T01:25:44.147751652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.094426795s" Jan 28 01:25:44.148133 containerd[1726]: time="2026-01-28T01:25:44.147782172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 28 01:25:44.150090 containerd[1726]: time="2026-01-28T01:25:44.150068850Z" level=info msg="CreateContainer within sandbox \"bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:25:44.179798 containerd[1726]: time="2026-01-28T01:25:44.179757747Z" level=info msg="CreateContainer within sandbox \"bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3\"" Jan 28 01:25:44.181858 containerd[1726]: time="2026-01-28T01:25:44.180561626Z" level=info msg="StartContainer for \"1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3\"" Jan 28 01:25:44.211639 systemd[1]: Started cri-containerd-1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3.scope - libcontainer container 1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3. 
Jan 28 01:25:44.239087 containerd[1726]: time="2026-01-28T01:25:44.238747501Z" level=info msg="StartContainer for \"1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3\" returns successfully" Jan 28 01:25:44.249524 systemd[1]: cri-containerd-1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3.scope: Deactivated successfully. Jan 28 01:25:44.942572 kubelet[3198]: E0128 01:25:44.942209 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:25:45.041633 kubelet[3198]: I0128 01:25:45.040890 3198 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 01:25:45.057331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3-rootfs.mount: Deactivated successfully. Jan 28 01:25:45.357553 containerd[1726]: time="2026-01-28T01:25:45.357434614Z" level=info msg="shim disconnected" id=1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3 namespace=k8s.io Jan 28 01:25:45.357978 containerd[1726]: time="2026-01-28T01:25:45.357517094Z" level=warning msg="cleaning up after shim disconnected" id=1dce9a9af334b82957d87242d94f46bc8bc68c0414c6874aff47612f75df20e3 namespace=k8s.io Jan 28 01:25:45.357978 containerd[1726]: time="2026-01-28T01:25:45.357619814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:46.046663 containerd[1726]: time="2026-01-28T01:25:46.046630407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:25:46.943695 kubelet[3198]: E0128 01:25:46.943615 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:25:48.374124 containerd[1726]: time="2026-01-28T01:25:48.374074261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:48.378522 containerd[1726]: time="2026-01-28T01:25:48.378473336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 28 01:25:48.381304 containerd[1726]: time="2026-01-28T01:25:48.381258774Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:48.385211 containerd[1726]: time="2026-01-28T01:25:48.385020770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:48.385757 containerd[1726]: time="2026-01-28T01:25:48.385727770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.338887523s" Jan 28 01:25:48.385823 
containerd[1726]: time="2026-01-28T01:25:48.385758250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 28 01:25:48.388922 containerd[1726]: time="2026-01-28T01:25:48.388881487Z" level=info msg="CreateContainer within sandbox \"bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:25:48.434007 containerd[1726]: time="2026-01-28T01:25:48.433946204Z" level=info msg="CreateContainer within sandbox \"bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143\"" Jan 28 01:25:48.434754 containerd[1726]: time="2026-01-28T01:25:48.434642004Z" level=info msg="StartContainer for \"020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143\"" Jan 28 01:25:48.463610 systemd[1]: Started cri-containerd-020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143.scope - libcontainer container 020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143. Jan 28 01:25:48.489248 containerd[1726]: time="2026-01-28T01:25:48.488569353Z" level=info msg="StartContainer for \"020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143\" returns successfully" Jan 28 01:25:48.942971 kubelet[3198]: E0128 01:25:48.942587 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:25:49.742957 containerd[1726]: time="2026-01-28T01:25:49.742914815Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:25:49.745918 systemd[1]: cri-containerd-020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143.scope: Deactivated successfully. Jan 28 01:25:49.765095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143-rootfs.mount: Deactivated successfully. 
Jan 28 01:25:49.844910 kubelet[3198]: I0128 01:25:49.844845 3198 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:25:50.153119 kubelet[3198]: I0128 01:25:49.919160 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50b57260-757d-49ab-b412-157457a311f9-goldmane-ca-bundle\") pod \"goldmane-666569f655-mm8vq\" (UID: \"50b57260-757d-49ab-b412-157457a311f9\") " pod="calico-system/goldmane-666569f655-mm8vq" Jan 28 01:25:50.153119 kubelet[3198]: I0128 01:25:49.919216 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxwnn\" (UniqueName: \"kubernetes.io/projected/e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6-kube-api-access-cxwnn\") pod \"coredns-668d6bf9bc-fv4g9\" (UID: \"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6\") " pod="kube-system/coredns-668d6bf9bc-fv4g9" Jan 28 01:25:50.153119 kubelet[3198]: I0128 01:25:49.919236 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/79191197-2837-43fa-b284-2023c360b9e2-calico-apiserver-certs\") pod \"calico-apiserver-84868d5f79-45qm5\" (UID: \"79191197-2837-43fa-b284-2023c360b9e2\") " pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" Jan 28 01:25:50.153119 kubelet[3198]: I0128 01:25:49.919252 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhzw\" (UniqueName: \"kubernetes.io/projected/79191197-2837-43fa-b284-2023c360b9e2-kube-api-access-lmhzw\") pod \"calico-apiserver-84868d5f79-45qm5\" (UID: \"79191197-2837-43fa-b284-2023c360b9e2\") " pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" Jan 28 01:25:50.153119 kubelet[3198]: I0128 01:25:49.919515 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/50b57260-757d-49ab-b412-157457a311f9-goldmane-key-pair\") pod \"goldmane-666569f655-mm8vq\" (UID: \"50b57260-757d-49ab-b412-157457a311f9\") " pod="calico-system/goldmane-666569f655-mm8vq" Jan 28 01:25:49.893918 systemd[1]: Created slice kubepods-burstable-pode7d11ad6_ecf0_4303_8f1f_51aaa54b1ca6.slice - libcontainer container kubepods-burstable-pode7d11ad6_ecf0_4303_8f1f_51aaa54b1ca6.slice. 
Jan 28 01:25:50.153617 kubelet[3198]: I0128 01:25:49.919545 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m6lw\" (UniqueName: \"kubernetes.io/projected/36f60f1b-edc5-4d4b-8496-6ed810707a8c-kube-api-access-4m6lw\") pod \"coredns-668d6bf9bc-5ldbp\" (UID: \"36f60f1b-edc5-4d4b-8496-6ed810707a8c\") " pod="kube-system/coredns-668d6bf9bc-5ldbp" Jan 28 01:25:50.153617 kubelet[3198]: I0128 01:25:49.919573 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efad0924-58e2-470d-a190-d57cd8685e98-tigera-ca-bundle\") pod \"calico-kube-controllers-f6944bcdb-mk9w8\" (UID: \"efad0924-58e2-470d-a190-d57cd8685e98\") " pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" Jan 28 01:25:50.153617 kubelet[3198]: I0128 01:25:49.919594 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-whisker-backend-key-pair\") pod \"whisker-6bb97ccffd-9stf2\" (UID: \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\") " pod="calico-system/whisker-6bb97ccffd-9stf2" Jan 28 01:25:50.153617 kubelet[3198]: I0128 01:25:49.919615 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b57260-757d-49ab-b412-157457a311f9-config\") pod \"goldmane-666569f655-mm8vq\" (UID: \"50b57260-757d-49ab-b412-157457a311f9\") " pod="calico-system/goldmane-666569f655-mm8vq" Jan 28 01:25:50.153617 kubelet[3198]: I0128 01:25:49.919633 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8mp\" (UniqueName: \"kubernetes.io/projected/efad0924-58e2-470d-a190-d57cd8685e98-kube-api-access-jc8mp\") pod \"calico-kube-controllers-f6944bcdb-mk9w8\" (UID: \"efad0924-58e2-470d-a190-d57cd8685e98\") " pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" Jan 28 01:25:49.910963 systemd[1]: Created slice kubepods-besteffort-podefad0924_58e2_470d_a190_d57cd8685e98.slice - libcontainer container kubepods-besteffort-podefad0924_58e2_470d_a190_d57cd8685e98.slice. 
Jan 28 01:25:50.153907 kubelet[3198]: I0128 01:25:49.919649 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19367d42-7907-4f04-8c63-bcae87fa9f82-calico-apiserver-certs\") pod \"calico-apiserver-84868d5f79-sv4mj\" (UID: \"19367d42-7907-4f04-8c63-bcae87fa9f82\") " pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" Jan 28 01:25:50.153907 kubelet[3198]: I0128 01:25:49.919668 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhj9p\" (UniqueName: \"kubernetes.io/projected/19367d42-7907-4f04-8c63-bcae87fa9f82-kube-api-access-zhj9p\") pod \"calico-apiserver-84868d5f79-sv4mj\" (UID: \"19367d42-7907-4f04-8c63-bcae87fa9f82\") " pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" Jan 28 01:25:50.153907 kubelet[3198]: I0128 01:25:49.919685 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-whisker-ca-bundle\") pod \"whisker-6bb97ccffd-9stf2\" (UID: \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\") " pod="calico-system/whisker-6bb97ccffd-9stf2" Jan 28 01:25:50.153907 kubelet[3198]: I0128 01:25:49.919709 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtpzm\" (UniqueName: \"kubernetes.io/projected/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-kube-api-access-rtpzm\") pod \"whisker-6bb97ccffd-9stf2\" (UID: \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\") " pod="calico-system/whisker-6bb97ccffd-9stf2" Jan 28 01:25:50.153907 kubelet[3198]: I0128 01:25:49.919730 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36f60f1b-edc5-4d4b-8496-6ed810707a8c-config-volume\") pod \"coredns-668d6bf9bc-5ldbp\" (UID: \"36f60f1b-edc5-4d4b-8496-6ed810707a8c\") " pod="kube-system/coredns-668d6bf9bc-5ldbp" Jan 28 01:25:49.919188 systemd[1]: Created slice kubepods-besteffort-pod19367d42_7907_4f04_8c63_bcae87fa9f82.slice - libcontainer container kubepods-besteffort-pod19367d42_7907_4f04_8c63_bcae87fa9f82.slice. Jan 28 01:25:50.157517 kubelet[3198]: I0128 01:25:49.919751 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6-config-volume\") pod \"coredns-668d6bf9bc-fv4g9\" (UID: \"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6\") " pod="kube-system/coredns-668d6bf9bc-fv4g9" Jan 28 01:25:50.157517 kubelet[3198]: I0128 01:25:49.919767 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd44v\" (UniqueName: \"kubernetes.io/projected/50b57260-757d-49ab-b412-157457a311f9-kube-api-access-bd44v\") pod \"goldmane-666569f655-mm8vq\" (UID: \"50b57260-757d-49ab-b412-157457a311f9\") " pod="calico-system/goldmane-666569f655-mm8vq" Jan 28 01:25:49.927560 systemd[1]: Created slice kubepods-besteffort-pod9a2ab7a4_4311_43c2_bd8b_f735f5bee0ae.slice - libcontainer container kubepods-besteffort-pod9a2ab7a4_4311_43c2_bd8b_f735f5bee0ae.slice. Jan 28 01:25:49.934870 systemd[1]: Created slice kubepods-burstable-pod36f60f1b_edc5_4d4b_8496_6ed810707a8c.slice - libcontainer container kubepods-burstable-pod36f60f1b_edc5_4d4b_8496_6ed810707a8c.slice. 
Jan 28 01:25:49.940360 systemd[1]: Created slice kubepods-besteffort-pod50b57260_757d_49ab_b412_157457a311f9.slice - libcontainer container kubepods-besteffort-pod50b57260_757d_49ab_b412_157457a311f9.slice. Jan 28 01:25:49.949763 systemd[1]: Created slice kubepods-besteffort-pod79191197_2837_43fa_b284_2023c360b9e2.slice - libcontainer container kubepods-besteffort-pod79191197_2837_43fa_b284_2023c360b9e2.slice. Jan 28 01:25:50.455534 containerd[1726]: time="2026-01-28T01:25:50.455416985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fv4g9,Uid:e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:50.458850 containerd[1726]: time="2026-01-28T01:25:50.458818582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6944bcdb-mk9w8,Uid:efad0924-58e2-470d-a190-d57cd8685e98,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:50.459399 containerd[1726]: time="2026-01-28T01:25:50.459358182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mm8vq,Uid:50b57260-757d-49ab-b412-157457a311f9,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:50.464224 containerd[1726]: time="2026-01-28T01:25:50.464197057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84868d5f79-sv4mj,Uid:19367d42-7907-4f04-8c63-bcae87fa9f82,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:50.464402 containerd[1726]: time="2026-01-28T01:25:50.464379217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb97ccffd-9stf2,Uid:9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:50.467200 containerd[1726]: time="2026-01-28T01:25:50.467171734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5ldbp,Uid:36f60f1b-edc5-4d4b-8496-6ed810707a8c,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:50.486059 containerd[1726]: time="2026-01-28T01:25:50.485884757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84868d5f79-45qm5,Uid:79191197-2837-43fa-b284-2023c360b9e2,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:50.797454 kubelet[3198]: I0128 01:25:50.797025 3198 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 01:25:50.948818 systemd[1]: Created slice kubepods-besteffort-podb0ef4dca_fc9b_48e6_a83b_e247508a0b04.slice - libcontainer container kubepods-besteffort-podb0ef4dca_fc9b_48e6_a83b_e247508a0b04.slice. 
Jan 28 01:25:50.951439 containerd[1726]: time="2026-01-28T01:25:50.951406519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwqqh,Uid:b0ef4dca-fc9b-48e6-a83b-e247508a0b04,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:51.943554 containerd[1726]: time="2026-01-28T01:25:51.943485387Z" level=info msg="shim disconnected" id=020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143 namespace=k8s.io Jan 28 01:25:51.943554 containerd[1726]: time="2026-01-28T01:25:51.943534387Z" level=warning msg="cleaning up after shim disconnected" id=020df64943c2b134764d4ef2657b11ae3cfa2913b3f965bc203522b39266e143 namespace=k8s.io Jan 28 01:25:51.943554 containerd[1726]: time="2026-01-28T01:25:51.943544347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:25:52.062157 containerd[1726]: time="2026-01-28T01:25:52.061978716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:25:54.587439 containerd[1726]: time="2026-01-28T01:25:54.587385340Z" level=error msg="Failed to destroy network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.588315 containerd[1726]: time="2026-01-28T01:25:54.588205899Z" level=error msg="encountered an error cleaning up failed sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.588315 containerd[1726]: time="2026-01-28T01:25:54.588271939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84868d5f79-sv4mj,Uid:19367d42-7907-4f04-8c63-bcae87fa9f82,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.588744 kubelet[3198]: E0128 01:25:54.588694 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.589273 kubelet[3198]: E0128 01:25:54.589044 3198 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" Jan 28 01:25:54.589273 kubelet[3198]: E0128 01:25:54.589084 3198 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" Jan 28 01:25:54.589273 kubelet[3198]: E0128 01:25:54.589129 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84868d5f79-sv4mj_calico-apiserver(19367d42-7907-4f04-8c63-bcae87fa9f82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84868d5f79-sv4mj_calico-apiserver(19367d42-7907-4f04-8c63-bcae87fa9f82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:25:54.743878 containerd[1726]: time="2026-01-28T01:25:54.743818972Z" level=error msg="Failed to destroy network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.744164 containerd[1726]: time="2026-01-28T01:25:54.744129932Z" level=error msg="encountered an error cleaning up failed sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.744249 containerd[1726]: time="2026-01-28T01:25:54.744191052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb97ccffd-9stf2,Uid:9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.744528 kubelet[3198]: E0128 01:25:54.744490 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.744578 kubelet[3198]: E0128 01:25:54.744554 3198 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bb97ccffd-9stf2" Jan 28 01:25:54.744603 kubelet[3198]: E0128 01:25:54.744575 3198 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bb97ccffd-9stf2" Jan 28 01:25:54.744657 kubelet[3198]: E0128 01:25:54.744611 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bb97ccffd-9stf2_calico-system(9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bb97ccffd-9stf2_calico-system(9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bb97ccffd-9stf2" podUID="9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae" Jan 28 01:25:54.788940 containerd[1726]: time="2026-01-28T01:25:54.788862130Z" level=error msg="Failed to destroy network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.789206 containerd[1726]: time="2026-01-28T01:25:54.789181129Z" level=error msg="encountered an error cleaning up failed sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.789253 containerd[1726]: time="2026-01-28T01:25:54.789233169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fv4g9,Uid:e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.789670 kubelet[3198]: E0128 01:25:54.789442 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.789670 kubelet[3198]: E0128 01:25:54.789506 3198 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fv4g9" Jan 28 01:25:54.789670 kubelet[3198]: E0128 01:25:54.789528 3198 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fv4g9" Jan 28 01:25:54.789798 kubelet[3198]: E0128 01:25:54.789569 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fv4g9_kube-system(e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fv4g9_kube-system(e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fv4g9" podUID="e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6" Jan 28 01:25:54.886692 containerd[1726]: time="2026-01-28T01:25:54.886538030Z" level=error msg="Failed to destroy network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.888068 containerd[1726]: time="2026-01-28T01:25:54.887667668Z" level=error msg="encountered an error cleaning up failed sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.888068 containerd[1726]: time="2026-01-28T01:25:54.887728548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6944bcdb-mk9w8,Uid:efad0924-58e2-470d-a190-d57cd8685e98,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.888213 kubelet[3198]: E0128 01:25:54.887984 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.889139 kubelet[3198]: E0128 01:25:54.888037 3198 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" Jan 28 01:25:54.889139 kubelet[3198]: 
E0128 01:25:54.888283 3198 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" Jan 28 01:25:54.889403 kubelet[3198]: E0128 01:25:54.889289 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f6944bcdb-mk9w8_calico-system(efad0924-58e2-470d-a190-d57cd8685e98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f6944bcdb-mk9w8_calico-system(efad0924-58e2-470d-a190-d57cd8685e98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:25:54.931157 containerd[1726]: time="2026-01-28T01:25:54.931102548Z" level=error msg="Failed to destroy network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.931431 containerd[1726]: time="2026-01-28T01:25:54.931404467Z" level=error msg="encountered an error cleaning up failed sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.931514 containerd[1726]: time="2026-01-28T01:25:54.931454307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mm8vq,Uid:50b57260-757d-49ab-b412-157457a311f9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.931715 kubelet[3198]: E0128 01:25:54.931678 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:54.931775 kubelet[3198]: E0128 01:25:54.931736 3198 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mm8vq" Jan 28 01:25:54.931775 kubelet[3198]: E0128 01:25:54.931762 3198 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mm8vq" Jan 28 01:25:54.931847 kubelet[3198]: E0128 01:25:54.931803 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mm8vq_calico-system(50b57260-757d-49ab-b412-157457a311f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mm8vq_calico-system(50b57260-757d-49ab-b412-157457a311f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:25:55.065690 kubelet[3198]: I0128 01:25:55.065299 3198 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:25:55.066208 kubelet[3198]: I0128 01:25:55.066181 3198 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:25:55.066858 containerd[1726]: time="2026-01-28T01:25:55.066706018Z" level=info msg="StopPodSandbox for \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\"" Jan 28 01:25:55.067448 containerd[1726]: time="2026-01-28T01:25:55.066724138Z" level=info msg="StopPodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\"" Jan 28 01:25:55.067448 containerd[1726]: time="2026-01-28T01:25:55.067193977Z" level=info msg="Ensure that sandbox 89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee in task-service has been cleanup successfully" Jan 28 01:25:55.068324 containerd[1726]: time="2026-01-28T01:25:55.067620697Z" level=info msg="Ensure that sandbox 3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185 in task-service has been cleanup successfully" Jan 28 01:25:55.071071 kubelet[3198]: I0128 01:25:55.070698 3198 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:25:55.071648 containerd[1726]: time="2026-01-28T01:25:55.071518609Z" level=info msg="StopPodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\"" Jan 28 01:25:55.072525 containerd[1726]: time="2026-01-28T01:25:55.072314088Z" level=info msg="Ensure that sandbox d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4 in task-service has been cleanup successfully" Jan 28 01:25:55.075327 kubelet[3198]: I0128 01:25:55.075048 3198 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:25:55.075483 containerd[1726]: time="2026-01-28T01:25:55.075443722Z" 
level=info msg="StopPodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\"" Jan 28 01:25:55.075823 containerd[1726]: time="2026-01-28T01:25:55.075795082Z" level=info msg="Ensure that sandbox 413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3 in task-service has been cleanup successfully" Jan 28 01:25:55.079779 kubelet[3198]: I0128 01:25:55.079759 3198 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:25:55.083770 containerd[1726]: time="2026-01-28T01:25:55.083737627Z" level=info msg="StopPodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\"" Jan 28 01:25:55.084636 containerd[1726]: time="2026-01-28T01:25:55.084415666Z" level=info msg="Ensure that sandbox d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873 in task-service has been cleanup successfully" Jan 28 01:25:55.108289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185-shm.mount: Deactivated successfully. Jan 28 01:25:55.108837 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee-shm.mount: Deactivated successfully. Jan 28 01:25:55.109002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3-shm.mount: Deactivated successfully. Jan 28 01:25:55.151176 containerd[1726]: time="2026-01-28T01:25:55.150877023Z" level=error msg="StopPodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\" failed" error="failed to destroy network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.151176 containerd[1726]: time="2026-01-28T01:25:55.151032983Z" level=error msg="StopPodSandbox for \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\" failed" error="failed to destroy network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.151307 kubelet[3198]: E0128 01:25:55.151209 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:25:55.151307 kubelet[3198]: E0128 01:25:55.151266 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee"} Jan 28 01:25:55.151368 kubelet[3198]: E0128 01:25:55.151331 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:25:55.151368 kubelet[3198]: E0128 01:25:55.151350 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fv4g9" podUID="e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6" Jan 28 01:25:55.153231 kubelet[3198]: E0128 01:25:55.151451 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:25:55.153231 kubelet[3198]: E0128 01:25:55.151489 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185"} Jan 28 01:25:55.153231 kubelet[3198]: E0128 01:25:55.151507 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"efad0924-58e2-470d-a190-d57cd8685e98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:25:55.153231 kubelet[3198]: E0128 01:25:55.151521 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"efad0924-58e2-470d-a190-d57cd8685e98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:25:55.162497 containerd[1726]: time="2026-01-28T01:25:55.162407162Z" level=error msg="StopPodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\" failed" error="failed to destroy network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.163018 kubelet[3198]: E0128 01:25:55.162709 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:25:55.163018 kubelet[3198]: E0128 01:25:55.162755 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4"} Jan 28 01:25:55.163018 kubelet[3198]: E0128 01:25:55.162804 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:25:55.163018 kubelet[3198]: E0128 01:25:55.162828 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bb97ccffd-9stf2" podUID="9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae" Jan 28 01:25:55.163959 containerd[1726]: time="2026-01-28T01:25:55.163920879Z" level=error msg="StopPodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\" failed" error="failed to destroy network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.164482 kubelet[3198]: E0128 01:25:55.164172 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:25:55.164482 kubelet[3198]: E0128 01:25:55.164203 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3"} Jan 28 01:25:55.164482 kubelet[3198]: E0128 01:25:55.164225 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19367d42-7907-4f04-8c63-bcae87fa9f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 28 01:25:55.164482 kubelet[3198]: E0128 01:25:55.164244 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19367d42-7907-4f04-8c63-bcae87fa9f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:25:55.167733 containerd[1726]: time="2026-01-28T01:25:55.167526233Z" level=error msg="StopPodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\" failed" error="failed to destroy network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.168367 kubelet[3198]: E0128 01:25:55.167695 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:25:55.168367 kubelet[3198]: E0128 01:25:55.167821 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873"} Jan 28 01:25:55.168367 kubelet[3198]: E0128 01:25:55.167850 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50b57260-757d-49ab-b412-157457a311f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:25:55.168518 kubelet[3198]: E0128 01:25:55.167868 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50b57260-757d-49ab-b412-157457a311f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:25:55.172358 containerd[1726]: time="2026-01-28T01:25:55.172323904Z" level=error msg="Failed to destroy network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.174511 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019-shm.mount: Deactivated successfully. Jan 28 01:25:55.174620 containerd[1726]: time="2026-01-28T01:25:55.174590860Z" level=error msg="encountered an error cleaning up failed sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.174679 containerd[1726]: time="2026-01-28T01:25:55.174648740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwqqh,Uid:b0ef4dca-fc9b-48e6-a83b-e247508a0b04,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.175911 kubelet[3198]: E0128 01:25:55.174792 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.175911 kubelet[3198]: E0128 01:25:55.174837 3198 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwqqh" Jan 28 01:25:55.175911 kubelet[3198]: E0128 01:25:55.174856 3198 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwqqh" Jan 28 01:25:55.176053 kubelet[3198]: E0128 01:25:55.174892 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:25:55.232410 containerd[1726]: time="2026-01-28T01:25:55.232356673Z" level=error msg="Failed to destroy network for sandbox 
\"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.232788 containerd[1726]: time="2026-01-28T01:25:55.232760033Z" level=error msg="encountered an error cleaning up failed sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.234506 containerd[1726]: time="2026-01-28T01:25:55.232809873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5ldbp,Uid:36f60f1b-edc5-4d4b-8496-6ed810707a8c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.234870 kubelet[3198]: E0128 01:25:55.234716 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.234870 kubelet[3198]: E0128 01:25:55.234773 3198 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5ldbp" Jan 28 01:25:55.234870 kubelet[3198]: E0128 01:25:55.234789 3198 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5ldbp" Jan 28 01:25:55.235329 kubelet[3198]: E0128 01:25:55.234837 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5ldbp_kube-system(36f60f1b-edc5-4d4b-8496-6ed810707a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5ldbp_kube-system(36f60f1b-edc5-4d4b-8496-6ed810707a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5ldbp" podUID="36f60f1b-edc5-4d4b-8496-6ed810707a8c" Jan 28 01:25:55.235451 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce-shm.mount: Deactivated successfully. Jan 28 01:25:55.340184 containerd[1726]: time="2026-01-28T01:25:55.340135155Z" level=error msg="Failed to destroy network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.342245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716-shm.mount: Deactivated successfully. Jan 28 01:25:55.342690 containerd[1726]: time="2026-01-28T01:25:55.342472471Z" level=error msg="encountered an error cleaning up failed sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.342690 containerd[1726]: time="2026-01-28T01:25:55.342531151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84868d5f79-45qm5,Uid:79191197-2837-43fa-b284-2023c360b9e2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.343014 kubelet[3198]: E0128 01:25:55.342981 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:55.343079 kubelet[3198]: E0128 01:25:55.343036 3198 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" Jan 28 01:25:55.343079 kubelet[3198]: E0128 01:25:55.343055 3198 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" Jan 28 01:25:55.343125 kubelet[3198]: E0128 01:25:55.343096 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84868d5f79-45qm5_calico-apiserver(79191197-2837-43fa-b284-2023c360b9e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-84868d5f79-45qm5_calico-apiserver(79191197-2837-43fa-b284-2023c360b9e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:25:56.083408 kubelet[3198]: I0128 01:25:56.083056 3198 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:25:56.202835 kubelet[3198]: I0128 01:25:56.085260 3198 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:25:56.202835 kubelet[3198]: I0128 01:25:56.087976 3198 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:25:56.202835 kubelet[3198]: E0128 01:25:56.121152 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:25:56.202835 kubelet[3198]: E0128 01:25:56.121198 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716"} Jan 28 01:25:56.202835 kubelet[3198]: E0128 01:25:56.121240 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79191197-2837-43fa-b284-2023c360b9e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:25:56.203060 containerd[1726]: time="2026-01-28T01:25:56.084776417Z" level=info msg="StopPodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\"" Jan 28 01:25:56.203060 containerd[1726]: time="2026-01-28T01:25:56.084942297Z" level=info msg="Ensure that sandbox 708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716 in task-service has been cleanup successfully" Jan 28 01:25:56.203060 containerd[1726]: time="2026-01-28T01:25:56.086266896Z" level=info msg="StopPodSandbox for \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\"" Jan 28 01:25:56.203060 containerd[1726]: time="2026-01-28T01:25:56.086838735Z" level=info msg="Ensure that sandbox 433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019 in task-service has been cleanup successfully" Jan 28 01:25:56.203060 containerd[1726]: time="2026-01-28T01:25:56.088547734Z" level=info msg="StopPodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\"" Jan 28 01:25:56.203060 containerd[1726]: 
time="2026-01-28T01:25:56.088671454Z" level=info msg="Ensure that sandbox c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce in task-service has been cleanup successfully" Jan 28 01:25:56.203060 containerd[1726]: time="2026-01-28T01:25:56.120904786Z" level=error msg="StopPodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\" failed" error="failed to destroy network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:56.203060 containerd[1726]: time="2026-01-28T01:25:56.130295418Z" level=error msg="StopPodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\" failed" error="failed to destroy network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:56.203060 containerd[1726]: time="2026-01-28T01:25:56.132060897Z" level=error msg="StopPodSandbox for \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\" failed" error="failed to destroy network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:25:56.203519 kubelet[3198]: E0128 01:25:56.121267 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79191197-2837-43fa-b284-2023c360b9e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:25:56.203519 kubelet[3198]: E0128 01:25:56.130560 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:25:56.203519 kubelet[3198]: E0128 01:25:56.130607 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce"} Jan 28 01:25:56.203519 kubelet[3198]: E0128 01:25:56.130638 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36f60f1b-edc5-4d4b-8496-6ed810707a8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:25:56.203645 kubelet[3198]: E0128 01:25:56.130658 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36f60f1b-edc5-4d4b-8496-6ed810707a8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5ldbp" podUID="36f60f1b-edc5-4d4b-8496-6ed810707a8c" Jan 28 01:25:56.203645 kubelet[3198]: E0128 01:25:56.132222 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:25:56.203645 kubelet[3198]: E0128 01:25:56.132252 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019"} Jan 28 01:25:56.203645 kubelet[3198]: E0128 01:25:56.132280 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:25:56.203758 kubelet[3198]: E0128 01:25:56.132297 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:26:05.963816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977669547.mount: Deactivated successfully. 
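Every failure in the burst above bottoms out in the same stat: the "calico" CNI plugin will not service an ADD or DEL until calico/node has published the host's node name at /var/lib/calico/nodename, so kubelet's retries keep hitting the identical error for every pod while that DaemonSet pod is still down. A minimal Go sketch of that gating check follows; it is an illustration of the behaviour the log describes, not the actual projectcalico/cni-plugin source.

```go
// Sketch only: an illustration of the gate the log keeps tripping, not the
// actual projectcalico/cni-plugin source. The plugin reads the node name that
// calico/node publishes before handling any CNI ADD/DEL request.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup

// readNodename mirrors the failing check: a missing file produces the exact
// hint that kubelet logs above for every sandbox.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		// While calico/node is down, every ADD and DEL fails here, so sandboxes
		// are marked SANDBOX_UNKNOWN and kubelet retries indefinitely.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```

Because the plugin fails the delete path as well as the add path, the half-created sandboxes above can neither start nor be torn down, which is why containerd marks them SANDBOX_UNKNOWN and kubelet loops on CreatePodSandbox/KillPodSandbox until the file appears.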
Jan 28 01:26:06.945219 containerd[1726]: time="2026-01-28T01:26:06.945172137Z" level=info msg="StopPodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\"" Jan 28 01:26:06.959206 containerd[1726]: time="2026-01-28T01:26:06.958905405Z" level=info msg="StopPodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\"" Jan 28 01:26:06.960079 containerd[1726]: time="2026-01-28T01:26:06.959963244Z" level=info msg="StopPodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\"" Jan 28 01:26:06.989925 containerd[1726]: time="2026-01-28T01:26:06.989866138Z" level=error msg="StopPodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\" failed" error="failed to destroy network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:26:06.994490 kubelet[3198]: E0128 01:26:06.994246 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:26:06.994490 kubelet[3198]: E0128 01:26:06.994331 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873"} Jan 28 01:26:06.994490 kubelet[3198]: E0128 01:26:06.994364 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50b57260-757d-49ab-b412-157457a311f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:26:06.994490 kubelet[3198]: E0128 01:26:06.994384 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50b57260-757d-49ab-b412-157457a311f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:26:07.008054 containerd[1726]: time="2026-01-28T01:26:07.007990162Z" level=error msg="StopPodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\" failed" error="failed to destroy network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:26:07.008810 kubelet[3198]: E0128 01:26:07.008498 3198 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:26:07.008810 kubelet[3198]: E0128 01:26:07.008547 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce"} Jan 28 01:26:07.008810 kubelet[3198]: E0128 01:26:07.008578 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36f60f1b-edc5-4d4b-8496-6ed810707a8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:26:07.008810 kubelet[3198]: E0128 01:26:07.008597 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36f60f1b-edc5-4d4b-8496-6ed810707a8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5ldbp" podUID="36f60f1b-edc5-4d4b-8496-6ed810707a8c" Jan 28 01:26:07.012183 containerd[1726]: time="2026-01-28T01:26:07.012104119Z" level=error msg="StopPodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\" failed" error="failed to destroy network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:26:07.012273 kubelet[3198]: E0128 01:26:07.012237 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:07.012308 kubelet[3198]: E0128 01:26:07.012269 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4"} Jan 28 01:26:07.012308 kubelet[3198]: E0128 01:26:07.012295 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:26:07.012382 kubelet[3198]: E0128 01:26:07.012312 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bb97ccffd-9stf2" podUID="9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae" Jan 28 01:26:07.943003 containerd[1726]: time="2026-01-28T01:26:07.942797468Z" level=info msg="StopPodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\"" Jan 28 01:26:07.965603 containerd[1726]: time="2026-01-28T01:26:07.965530249Z" level=error msg="StopPodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\" failed" error="failed to destroy network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:26:07.966116 kubelet[3198]: E0128 01:26:07.966076 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:26:07.966189 kubelet[3198]: E0128 01:26:07.966131 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3"} Jan 28 01:26:07.966189 kubelet[3198]: E0128 01:26:07.966169 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19367d42-7907-4f04-8c63-bcae87fa9f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:26:07.966270 kubelet[3198]: E0128 01:26:07.966193 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19367d42-7907-4f04-8c63-bcae87fa9f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:26:08.948424 containerd[1726]: time="2026-01-28T01:26:08.948384060Z" level=info msg="StopPodSandbox for 
\"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\"" Jan 28 01:26:08.952433 containerd[1726]: time="2026-01-28T01:26:08.951796817Z" level=info msg="StopPodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\"" Jan 28 01:26:09.001516 containerd[1726]: time="2026-01-28T01:26:09.000056583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:09.013492 containerd[1726]: time="2026-01-28T01:26:09.012832614Z" level=error msg="StopPodSandbox for \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\" failed" error="failed to destroy network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:26:09.014141 kubelet[3198]: E0128 01:26:09.014097 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:26:09.014414 kubelet[3198]: E0128 01:26:09.014152 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185"} Jan 28 01:26:09.014414 kubelet[3198]: E0128 01:26:09.014185 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"efad0924-58e2-470d-a190-d57cd8685e98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:26:09.014414 kubelet[3198]: E0128 01:26:09.014204 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"efad0924-58e2-470d-a190-d57cd8685e98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:26:09.019201 containerd[1726]: time="2026-01-28T01:26:09.019020770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 28 01:26:09.024693 containerd[1726]: time="2026-01-28T01:26:09.024649646Z" level=error msg="StopPodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\" failed" error="failed to destroy network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:26:09.024892 kubelet[3198]: E0128 01:26:09.024837 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:26:09.024946 kubelet[3198]: E0128 01:26:09.024899 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716"} Jan 28 01:26:09.024946 kubelet[3198]: E0128 01:26:09.024932 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79191197-2837-43fa-b284-2023c360b9e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:26:09.025031 kubelet[3198]: E0128 01:26:09.024953 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79191197-2837-43fa-b284-2023c360b9e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:26:09.054689 containerd[1726]: time="2026-01-28T01:26:09.053708985Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:09.100585 containerd[1726]: time="2026-01-28T01:26:09.100542832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:09.101737 containerd[1726]: time="2026-01-28T01:26:09.101418991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 17.039400875s" Jan 28 01:26:09.101852 containerd[1726]: time="2026-01-28T01:26:09.101835151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 28 01:26:09.154512 containerd[1726]: time="2026-01-28T01:26:09.154451474Z" level=info msg="CreateContainer within sandbox \"bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:26:09.758293 
containerd[1726]: time="2026-01-28T01:26:09.758132246Z" level=info msg="CreateContainer within sandbox \"bae04dea93e4def94fa89140d0240ed9647361d5c2ece965b297fb658c209f89\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c4be91cd68e119824a4328569c81b0154e835d8312d0c08bb48b99a8b19ffd1\"" Jan 28 01:26:09.759737 containerd[1726]: time="2026-01-28T01:26:09.759063005Z" level=info msg="StartContainer for \"1c4be91cd68e119824a4328569c81b0154e835d8312d0c08bb48b99a8b19ffd1\"" Jan 28 01:26:09.785608 systemd[1]: Started cri-containerd-1c4be91cd68e119824a4328569c81b0154e835d8312d0c08bb48b99a8b19ffd1.scope - libcontainer container 1c4be91cd68e119824a4328569c81b0154e835d8312d0c08bb48b99a8b19ffd1. Jan 28 01:26:09.943334 containerd[1726]: time="2026-01-28T01:26:09.943296555Z" level=info msg="StopPodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\"" Jan 28 01:26:09.967550 containerd[1726]: time="2026-01-28T01:26:09.967500418Z" level=error msg="StopPodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\" failed" error="failed to destroy network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:26:09.967933 kubelet[3198]: E0128 01:26:09.967714 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:26:09.967933 kubelet[3198]: E0128 01:26:09.967764 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee"} Jan 28 01:26:09.967933 kubelet[3198]: E0128 01:26:09.967797 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:26:09.967933 kubelet[3198]: E0128 01:26:09.967818 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fv4g9" podUID="e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6" Jan 28 01:26:10.543643 containerd[1726]: time="2026-01-28T01:26:10.543599490Z" level=info msg="StartContainer for \"1c4be91cd68e119824a4328569c81b0154e835d8312d0c08bb48b99a8b19ffd1\" returns successfully" 
Jan 28 01:26:10.575234 kubelet[3198]: I0128 01:26:10.574756 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2k28l" podStartSLOduration=1.95023923 podStartE2EDuration="29.574740388s" podCreationTimestamp="2026-01-28 01:25:41 +0000 UTC" firstStartedPulling="2026-01-28 01:25:41.478535872 +0000 UTC m=+24.641677014" lastFinishedPulling="2026-01-28 01:26:09.10303703 +0000 UTC m=+52.266178172" observedRunningTime="2026-01-28 01:26:10.573648189 +0000 UTC m=+53.736789371" watchObservedRunningTime="2026-01-28 01:26:10.574740388 +0000 UTC m=+53.737881530" Jan 28 01:26:10.928529 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:26:10.928649 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 28 01:26:10.944854 containerd[1726]: time="2026-01-28T01:26:10.944822926Z" level=info msg="StopPodSandbox for \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\"" Jan 28 01:26:10.973952 containerd[1726]: time="2026-01-28T01:26:10.973831185Z" level=error msg="StopPodSandbox for \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\" failed" error="failed to destroy network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:26:10.974119 kubelet[3198]: E0128 01:26:10.974058 3198 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:26:10.974175 kubelet[3198]: E0128 01:26:10.974121 3198 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019"} Jan 28 01:26:10.974175 kubelet[3198]: E0128 01:26:10.974162 3198 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:26:10.974268 kubelet[3198]: E0128 01:26:10.974182 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0ef4dca-fc9b-48e6-a83b-e247508a0b04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:26:11.057444 containerd[1726]: time="2026-01-28T01:26:11.057401326Z" level=info 
msg="StopPodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\"" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.142 [INFO][4571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.142 [INFO][4571] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" iface="eth0" netns="/var/run/netns/cni-9c290ce4-b056-6fa9-7189-fd390f26fc56" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.142 [INFO][4571] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" iface="eth0" netns="/var/run/netns/cni-9c290ce4-b056-6fa9-7189-fd390f26fc56" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.142 [INFO][4571] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" iface="eth0" netns="/var/run/netns/cni-9c290ce4-b056-6fa9-7189-fd390f26fc56" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.142 [INFO][4571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.142 [INFO][4571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.166 [INFO][4578] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.167 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.167 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.179 [WARNING][4578] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.179 [INFO][4578] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.181 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:11.185412 containerd[1726]: 2026-01-28 01:26:11.183 [INFO][4571] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:11.185412 containerd[1726]: time="2026-01-28T01:26:11.185367595Z" level=info msg="TearDown network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\" successfully" Jan 28 01:26:11.185412 containerd[1726]: time="2026-01-28T01:26:11.185393755Z" level=info msg="StopPodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\" returns successfully" Jan 28 01:26:11.190064 systemd[1]: run-netns-cni\x2d9c290ce4\x2db056\x2d6fa9\x2d7189\x2dfd390f26fc56.mount: Deactivated successfully. Jan 28 01:26:11.253278 kubelet[3198]: I0128 01:26:11.251657 3198 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-whisker-backend-key-pair\") pod \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\" (UID: \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\") " Jan 28 01:26:11.253278 kubelet[3198]: I0128 01:26:11.251696 3198 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtpzm\" (UniqueName: \"kubernetes.io/projected/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-kube-api-access-rtpzm\") pod \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\" (UID: \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\") " Jan 28 01:26:11.253278 kubelet[3198]: I0128 01:26:11.251720 3198 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-whisker-ca-bundle\") pod \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\" (UID: \"9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae\") " Jan 28 01:26:11.253278 kubelet[3198]: I0128 01:26:11.252034 3198 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae" (UID: "9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:26:11.256681 kubelet[3198]: I0128 01:26:11.256644 3198 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae" (UID: "9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:26:11.257937 kubelet[3198]: I0128 01:26:11.257561 3198 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-kube-api-access-rtpzm" (OuterVolumeSpecName: "kube-api-access-rtpzm") pod "9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae" (UID: "9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae"). InnerVolumeSpecName "kube-api-access-rtpzm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:26:11.257972 systemd[1]: var-lib-kubelet-pods-9a2ab7a4\x2d4311\x2d43c2\x2dbd8b\x2df735f5bee0ae-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 01:26:11.258054 systemd[1]: var-lib-kubelet-pods-9a2ab7a4\x2d4311\x2d43c2\x2dbd8b\x2df735f5bee0ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtpzm.mount: Deactivated successfully. 
Jan 28 01:26:11.352252 kubelet[3198]: I0128 01:26:11.352187 3198 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-20d4350ff0\" DevicePath \"\"" Jan 28 01:26:11.352252 kubelet[3198]: I0128 01:26:11.352219 3198 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtpzm\" (UniqueName: \"kubernetes.io/projected/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-kube-api-access-rtpzm\") on node \"ci-4081.3.6-n-20d4350ff0\" DevicePath \"\"" Jan 28 01:26:11.352252 kubelet[3198]: I0128 01:26:11.352230 3198 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae-whisker-ca-bundle\") on node \"ci-4081.3.6-n-20d4350ff0\" DevicePath \"\"" Jan 28 01:26:11.556109 systemd[1]: Removed slice kubepods-besteffort-pod9a2ab7a4_4311_43c2_bd8b_f735f5bee0ae.slice - libcontainer container kubepods-besteffort-pod9a2ab7a4_4311_43c2_bd8b_f735f5bee0ae.slice. Jan 28 01:26:11.637982 systemd[1]: Created slice kubepods-besteffort-pod7986b68d_2b69_4fd3_a1ac_2bbd1d928663.slice - libcontainer container kubepods-besteffort-pod7986b68d_2b69_4fd3_a1ac_2bbd1d928663.slice. Jan 28 01:26:11.653295 kubelet[3198]: I0128 01:26:11.653262 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2wd\" (UniqueName: \"kubernetes.io/projected/7986b68d-2b69-4fd3-a1ac-2bbd1d928663-kube-api-access-8l2wd\") pod \"whisker-766b799ccb-m5599\" (UID: \"7986b68d-2b69-4fd3-a1ac-2bbd1d928663\") " pod="calico-system/whisker-766b799ccb-m5599" Jan 28 01:26:11.654133 kubelet[3198]: I0128 01:26:11.653726 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7986b68d-2b69-4fd3-a1ac-2bbd1d928663-whisker-backend-key-pair\") pod \"whisker-766b799ccb-m5599\" (UID: \"7986b68d-2b69-4fd3-a1ac-2bbd1d928663\") " pod="calico-system/whisker-766b799ccb-m5599" Jan 28 01:26:11.654275 kubelet[3198]: I0128 01:26:11.654258 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7986b68d-2b69-4fd3-a1ac-2bbd1d928663-whisker-ca-bundle\") pod \"whisker-766b799ccb-m5599\" (UID: \"7986b68d-2b69-4fd3-a1ac-2bbd1d928663\") " pod="calico-system/whisker-766b799ccb-m5599" Jan 28 01:26:11.942359 containerd[1726]: time="2026-01-28T01:26:11.942257939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-766b799ccb-m5599,Uid:7986b68d-2b69-4fd3-a1ac-2bbd1d928663,Namespace:calico-system,Attempt:0,}" Jan 28 01:26:12.629543 kernel: bpftool[4742]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 01:26:12.920522 systemd-networkd[1361]: vxlan.calico: Link UP Jan 28 01:26:12.920529 systemd-networkd[1361]: vxlan.calico: Gained carrier Jan 28 01:26:12.945176 kubelet[3198]: I0128 01:26:12.944975 3198 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae" path="/var/lib/kubelet/pods/9a2ab7a4-4311-43c2-bd8b-f735f5bee0ae/volumes" Jan 28 01:26:13.461289 systemd-networkd[1361]: califc713499d92: Link UP Jan 28 01:26:13.462220 systemd-networkd[1361]: califc713499d92: Gained carrier Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.379 [INFO][4787] cni-plugin/plugin.go 340: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0 whisker-766b799ccb- calico-system 7986b68d-2b69-4fd3-a1ac-2bbd1d928663 938 0 2026-01-28 01:26:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:766b799ccb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-20d4350ff0 whisker-766b799ccb-m5599 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califc713499d92 [] [] }} ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Namespace="calico-system" Pod="whisker-766b799ccb-m5599" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.379 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Namespace="calico-system" Pod="whisker-766b799ccb-m5599" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.406 [INFO][4802] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" HandleID="k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.407 [INFO][4802] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" HandleID="k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-20d4350ff0", "pod":"whisker-766b799ccb-m5599", "timestamp":"2026-01-28 01:26:13.406911702 +0000 UTC"}, Hostname:"ci-4081.3.6-n-20d4350ff0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.407 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.407 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.407 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-20d4350ff0' Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.417 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.421 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.425 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.427 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.429 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.429 [INFO][4802] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.430 [INFO][4802] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037 Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.438 [INFO][4802] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.448 [INFO][4802] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.65/26] block=192.168.9.64/26 handle="k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.449 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.65/26] handle="k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.449 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
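The IPAM trace above records the node's block affinity: this host owns 192.168.9.64/26 and hands the first free address, 192.168.9.65, to the new whisker pod, bracketing the whole operation with the host-wide IPAM lock so concurrent CNI calls on the node serialize. The arithmetic checks out with the standard library (illustrative, not Calico code):

// Confirms the numbers in the IPAM log lines: 192.168.9.65 falls inside
// the node's affine block 192.168.9.64/26, which spans 64 addresses.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.9.64/26")
	assigned := netip.MustParseAddr("192.168.9.65")
	fmt.Println(block.Contains(assigned))  // true
	fmt.Println(1 << (32 - block.Bits()))  // 64 addresses per /26 block
}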
Jan 28 01:26:13.487968 containerd[1726]: 2026-01-28 01:26:13.449 [INFO][4802] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.65/26] IPv6=[] ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" HandleID="k8s-pod-network.6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" Jan 28 01:26:13.489664 containerd[1726]: 2026-01-28 01:26:13.452 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Namespace="calico-system" Pod="whisker-766b799ccb-m5599" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0", GenerateName:"whisker-766b799ccb-", Namespace:"calico-system", SelfLink:"", UID:"7986b68d-2b69-4fd3-a1ac-2bbd1d928663", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"766b799ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"", Pod:"whisker-766b799ccb-m5599", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califc713499d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:13.489664 containerd[1726]: 2026-01-28 01:26:13.452 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.65/32] ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Namespace="calico-system" Pod="whisker-766b799ccb-m5599" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" Jan 28 01:26:13.489664 containerd[1726]: 2026-01-28 01:26:13.452 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc713499d92 ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Namespace="calico-system" Pod="whisker-766b799ccb-m5599" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" Jan 28 01:26:13.489664 containerd[1726]: 2026-01-28 01:26:13.462 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Namespace="calico-system" Pod="whisker-766b799ccb-m5599" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" Jan 28 01:26:13.489664 containerd[1726]: 2026-01-28 01:26:13.463 [INFO][4787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Namespace="calico-system" 
Pod="whisker-766b799ccb-m5599" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0", GenerateName:"whisker-766b799ccb-", Namespace:"calico-system", SelfLink:"", UID:"7986b68d-2b69-4fd3-a1ac-2bbd1d928663", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"766b799ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037", Pod:"whisker-766b799ccb-m5599", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califc713499d92", MAC:"fa:46:95:0e:dd:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:13.489664 containerd[1726]: 2026-01-28 01:26:13.476 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037" Namespace="calico-system" Pod="whisker-766b799ccb-m5599" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--766b799ccb--m5599-eth0" Jan 28 01:26:13.520217 containerd[1726]: time="2026-01-28T01:26:13.520134422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:13.520448 containerd[1726]: time="2026-01-28T01:26:13.520231022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:13.521094 containerd[1726]: time="2026-01-28T01:26:13.520808382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:13.521300 containerd[1726]: time="2026-01-28T01:26:13.521235381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:13.545298 systemd[1]: run-containerd-runc-k8s.io-6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037-runc.cgOL9J.mount: Deactivated successfully. Jan 28 01:26:13.555645 systemd[1]: Started cri-containerd-6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037.scope - libcontainer container 6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037. 
Jan 28 01:26:13.587453 containerd[1726]: time="2026-01-28T01:26:13.587410574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-766b799ccb-m5599,Uid:7986b68d-2b69-4fd3-a1ac-2bbd1d928663,Namespace:calico-system,Attempt:0,} returns sandbox id \"6479ef1199cdd4973816b5bd8fae576331a0c276ff440ebf33ce491ee3620037\"" Jan 28 01:26:13.589672 containerd[1726]: time="2026-01-28T01:26:13.589136853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:26:13.852053 containerd[1726]: time="2026-01-28T01:26:13.852006307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:13.856249 containerd[1726]: time="2026-01-28T01:26:13.856195344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:26:13.856607 containerd[1726]: time="2026-01-28T01:26:13.856304904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:26:13.856657 kubelet[3198]: E0128 01:26:13.856454 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:13.856657 kubelet[3198]: E0128 01:26:13.856525 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:13.860327 kubelet[3198]: E0128 01:26:13.860277 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41cd963ee3c14a94bb038663169e4951,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-766b799ccb-m5599_calico-system(7986b68d-2b69-4fd3-a1ac-2bbd1d928663): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:13.862951 containerd[1726]: time="2026-01-28T01:26:13.862920859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:26:13.974073 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Jan 28 01:26:14.140836 containerd[1726]: time="2026-01-28T01:26:14.140576183Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:14.151419 containerd[1726]: time="2026-01-28T01:26:14.151273095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:26:14.151419 containerd[1726]: time="2026-01-28T01:26:14.151385055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:26:14.151604 kubelet[3198]: E0128 01:26:14.151537 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:14.151604 kubelet[3198]: E0128 01:26:14.151582 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:14.151882 kubelet[3198]: E0128 01:26:14.151679 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-766b799ccb-m5599_calico-system(7986b68d-2b69-4fd3-a1ac-2bbd1d928663): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:14.153079 kubelet[3198]: E0128 01:26:14.153025 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:26:14.559654 kubelet[3198]: E0128 01:26:14.559592 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:26:15.125652 systemd-networkd[1361]: califc713499d92: Gained IPv6LL Jan 28 01:26:15.562682 kubelet[3198]: E0128 01:26:15.562618 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:26:16.962412 containerd[1726]: time="2026-01-28T01:26:16.962381540Z" level=info msg="StopPodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\"" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.000 [WARNING][4897] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.000 [INFO][4897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.000 [INFO][4897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" iface="eth0" netns="" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.000 [INFO][4897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.000 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.025 [INFO][4904] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.025 [INFO][4904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.025 [INFO][4904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.036 [WARNING][4904] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.036 [INFO][4904] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.037 [INFO][4904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:17.041126 containerd[1726]: 2026-01-28 01:26:17.039 [INFO][4897] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:17.041126 containerd[1726]: time="2026-01-28T01:26:17.040925479Z" level=info msg="TearDown network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\" successfully" Jan 28 01:26:17.041126 containerd[1726]: time="2026-01-28T01:26:17.040948479Z" level=info msg="StopPodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\" returns successfully" Jan 28 01:26:17.041818 containerd[1726]: time="2026-01-28T01:26:17.041483359Z" level=info msg="RemovePodSandbox for \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\"" Jan 28 01:26:17.043821 containerd[1726]: time="2026-01-28T01:26:17.043790357Z" level=info msg="Forcibly stopping sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\"" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.074 [WARNING][4918] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.075 [INFO][4918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.075 [INFO][4918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" iface="eth0" netns="" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.075 [INFO][4918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.075 [INFO][4918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.093 [INFO][4925] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.093 [INFO][4925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.093 [INFO][4925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.101 [WARNING][4925] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.101 [INFO][4925] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" HandleID="k8s-pod-network.d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Workload="ci--4081.3.6--n--20d4350ff0-k8s-whisker--6bb97ccffd--9stf2-eth0" Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.102 [INFO][4925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:17.107528 containerd[1726]: 2026-01-28 01:26:17.104 [INFO][4918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4" Jan 28 01:26:17.107528 containerd[1726]: time="2026-01-28T01:26:17.106591348Z" level=info msg="TearDown network for sandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\" successfully" Jan 28 01:26:17.147899 containerd[1726]: time="2026-01-28T01:26:17.147858876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:26:17.148082 containerd[1726]: time="2026-01-28T01:26:17.148066276Z" level=info msg="RemovePodSandbox \"d5fa88db5e6502a455b9b9eea96376e3d8c88ffe83394b56ea428f49d8f8d6c4\" returns successfully" Jan 28 01:26:19.943484 containerd[1726]: time="2026-01-28T01:26:19.943151069Z" level=info msg="StopPodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\"" Jan 28 01:26:19.944207 containerd[1726]: time="2026-01-28T01:26:19.943873589Z" level=info msg="StopPodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\"" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.008 [INFO][4958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.008 [INFO][4958] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" iface="eth0" netns="/var/run/netns/cni-48f48aff-037a-457a-c2fa-a245850fe116" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.008 [INFO][4958] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" iface="eth0" netns="/var/run/netns/cni-48f48aff-037a-457a-c2fa-a245850fe116" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.009 [INFO][4958] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" iface="eth0" netns="/var/run/netns/cni-48f48aff-037a-457a-c2fa-a245850fe116" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.009 [INFO][4958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.009 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.033 [INFO][4971] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.034 [INFO][4971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.034 [INFO][4971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.042 [WARNING][4971] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.042 [INFO][4971] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.043 [INFO][4971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:20.047071 containerd[1726]: 2026-01-28 01:26:20.045 [INFO][4958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:26:20.048098 containerd[1726]: time="2026-01-28T01:26:20.048064068Z" level=info msg="TearDown network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\" successfully" Jan 28 01:26:20.050410 containerd[1726]: time="2026-01-28T01:26:20.050363106Z" level=info msg="StopPodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\" returns successfully" Jan 28 01:26:20.051635 systemd[1]: run-netns-cni\x2d48f48aff\x2d037a\x2d457a\x2dc2fa\x2da245850fe116.mount: Deactivated successfully. Jan 28 01:26:20.053572 containerd[1726]: time="2026-01-28T01:26:20.052955584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84868d5f79-45qm5,Uid:79191197-2837-43fa-b284-2023c360b9e2,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.004 [INFO][4957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.005 [INFO][4957] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" iface="eth0" netns="/var/run/netns/cni-927eac85-c85d-c517-ec53-60539d0c5899" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.008 [INFO][4957] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" iface="eth0" netns="/var/run/netns/cni-927eac85-c85d-c517-ec53-60539d0c5899" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.012 [INFO][4957] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" iface="eth0" netns="/var/run/netns/cni-927eac85-c85d-c517-ec53-60539d0c5899" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.012 [INFO][4957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.012 [INFO][4957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.035 [INFO][4976] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.036 [INFO][4976] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.043 [INFO][4976] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.055 [WARNING][4976] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.055 [INFO][4976] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.056 [INFO][4976] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:20.060914 containerd[1726]: 2026-01-28 01:26:20.058 [INFO][4957] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:26:20.061807 containerd[1726]: time="2026-01-28T01:26:20.061357938Z" level=info msg="TearDown network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\" successfully" Jan 28 01:26:20.061807 containerd[1726]: time="2026-01-28T01:26:20.061380378Z" level=info msg="StopPodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\" returns successfully" Jan 28 01:26:20.062239 containerd[1726]: time="2026-01-28T01:26:20.062031217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84868d5f79-sv4mj,Uid:19367d42-7907-4f04-8c63-bcae87fa9f82,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:26:20.065374 systemd[1]: run-netns-cni\x2d927eac85\x2dc85d\x2dc517\x2dec53\x2d60539d0c5899.mount: Deactivated successfully. Jan 28 01:26:20.758043 systemd-networkd[1361]: calib29207a793b: Link UP Jan 28 01:26:20.760197 systemd-networkd[1361]: calib29207a793b: Gained carrier Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.674 [INFO][4985] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0 calico-apiserver-84868d5f79- calico-apiserver 79191197-2837-43fa-b284-2023c360b9e2 983 0 2026-01-28 01:25:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84868d5f79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-20d4350ff0 calico-apiserver-84868d5f79-45qm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib29207a793b [] [] }} ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-45qm5" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.674 [INFO][4985] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-45qm5" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.700 [INFO][4997] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" HandleID="k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.701 [INFO][4997] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" HandleID="k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-20d4350ff0", "pod":"calico-apiserver-84868d5f79-45qm5", "timestamp":"2026-01-28 01:26:20.700908682 +0000 UTC"}, Hostname:"ci-4081.3.6-n-20d4350ff0", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.701 [INFO][4997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.701 [INFO][4997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.701 [INFO][4997] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-20d4350ff0' Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.713 [INFO][4997] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.718 [INFO][4997] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.724 [INFO][4997] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.725 [INFO][4997] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.728 [INFO][4997] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.728 [INFO][4997] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.731 [INFO][4997] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21 Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.741 [INFO][4997] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.750 [INFO][4997] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.66/26] block=192.168.9.64/26 handle="k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.750 [INFO][4997] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.66/26] handle="k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.750 [INFO][4997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
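
[Annotation] The IPAM sequence just above (affinity lookup for 192.168.9.64/26, block load, claim of 192.168.9.66/26, all bracketed by the host-wide IPAM lock) is Calico's standard auto-assignment path, and the plugin logs its AutoAssignArgs struct verbatim. A minimal sketch of the equivalent libcalico-go call follows. Import paths assume the current Calico monorepo layout and the AutoAssign return type varies across libcalico-go versions, so treat this as illustrative rather than the plugin's exact code.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

// assignPodIP asks Calico IPAM for one IPv4 address for a workload,
// mirroring the AutoAssignArgs{Num4:1, Num6:0, ...} printed in the log.
func assignPodIP(ctx context.Context, c clientv3.Interface, handleID, node, namespace, pod string) error {
	args := ipam.AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handleID, // "k8s-pod-network.<containerID>" in the log
		Attrs: map[string]string{
			"namespace": namespace,
			"node":      node,
			"pod":       pod,
		},
		Hostname:    node, // block affinity is looked up per host
		IntendedUse: "Workload",
	}
	v4, _, err := c.IPAM().AutoAssign(ctx, args)
	if err != nil {
		return err // e.g. no usable address blocks left for this host
	}
	fmt.Printf("assigned IPv4s: %+v\n", v4)
	return nil
}

func main() {
	// NewFromEnv reads the usual Calico datastore environment variables.
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical handle and pod names, for illustration only.
	handle := "k8s-pod-network.example"
	if err := assignPodIP(context.Background(), c, handle,
		"ci-4081.3.6-n-20d4350ff0", "calico-apiserver", "example-pod"); err != nil {
		log.Fatal(err)
	}
}
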
Jan 28 01:26:20.784443 containerd[1726]: 2026-01-28 01:26:20.750 [INFO][4997] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.66/26] IPv6=[] ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" HandleID="k8s-pod-network.401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.787322 containerd[1726]: 2026-01-28 01:26:20.753 [INFO][4985] cni-plugin/k8s.go 418: Populated endpoint ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-45qm5" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0", GenerateName:"calico-apiserver-84868d5f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"79191197-2837-43fa-b284-2023c360b9e2", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84868d5f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"", Pod:"calico-apiserver-84868d5f79-45qm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib29207a793b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:20.787322 containerd[1726]: 2026-01-28 01:26:20.753 [INFO][4985] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.66/32] ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-45qm5" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.787322 containerd[1726]: 2026-01-28 01:26:20.753 [INFO][4985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib29207a793b ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-45qm5" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.787322 containerd[1726]: 2026-01-28 01:26:20.760 [INFO][4985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-45qm5" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.787322 containerd[1726]: 2026-01-28 01:26:20.764 [INFO][4985] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-45qm5" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0", GenerateName:"calico-apiserver-84868d5f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"79191197-2837-43fa-b284-2023c360b9e2", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84868d5f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21", Pod:"calico-apiserver-84868d5f79-45qm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib29207a793b", MAC:"66:42:6a:2e:95:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:20.787322 containerd[1726]: 2026-01-28 01:26:20.777 [INFO][4985] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-45qm5" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:26:20.809627 containerd[1726]: time="2026-01-28T01:26:20.809237878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:20.809627 containerd[1726]: time="2026-01-28T01:26:20.809333958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:20.809627 containerd[1726]: time="2026-01-28T01:26:20.809485798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:20.809627 containerd[1726]: time="2026-01-28T01:26:20.809612158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:20.836628 systemd[1]: Started cri-containerd-401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21.scope - libcontainer container 401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21. 
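
[Annotation] Host-side interface names such as calib29207a793b above are deterministic: Calico derives them from the workload identity, so repeated setups of the same pod land on the same veth, which is what lets systemd-networkd track the device across events. The sketch below reproduces the scheme consistent with the names in this log, a "cali" prefix plus 11 hex characters of a hash digest; the helper name and the exact hash input are assumptions, and Calico's real implementation lives in its CNI plugin utilities.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload is a hypothetical re-implementation of the naming
// scheme visible in the log: "cali" followed by the first 11 hex characters
// of a SHA-1 digest over "<namespace>.<pod>". 4 + 11 = 15 characters, which
// stays within Linux's 15-character network interface name limit.
func vethNameForWorkload(namespace, pod string) string {
	h := sha1.New()
	h.Write([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
	return "cali" + hex.EncodeToString(h.Sum(nil))[:11]
}

func main() {
	// Deterministic: the same workload always maps to the same device name.
	fmt.Println(vethNameForWorkload("calico-apiserver", "calico-apiserver-84868d5f79-45qm5"))
}
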
Jan 28 01:26:20.861574 systemd-networkd[1361]: cali536053265ae: Link UP Jan 28 01:26:20.862325 systemd-networkd[1361]: cali536053265ae: Gained carrier Jan 28 01:26:20.896499 containerd[1726]: time="2026-01-28T01:26:20.896325970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84868d5f79-45qm5,Uid:79191197-2837-43fa-b284-2023c360b9e2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21\"" Jan 28 01:26:20.901487 containerd[1726]: time="2026-01-28T01:26:20.900862007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.732 [INFO][5003] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0 calico-apiserver-84868d5f79- calico-apiserver 19367d42-7907-4f04-8c63-bcae87fa9f82 982 0 2026-01-28 01:25:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84868d5f79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-20d4350ff0 calico-apiserver-84868d5f79-sv4mj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali536053265ae [] [] }} ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-sv4mj" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.732 [INFO][5003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-sv4mj" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.787 [INFO][5016] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" HandleID="k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.787 [INFO][5016] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" HandleID="k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bf10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-20d4350ff0", "pod":"calico-apiserver-84868d5f79-sv4mj", "timestamp":"2026-01-28 01:26:20.787453575 +0000 UTC"}, Hostname:"ci-4081.3.6-n-20d4350ff0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.787 [INFO][5016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.787 [INFO][5016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.787 [INFO][5016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-20d4350ff0' Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.813 [INFO][5016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.821 [INFO][5016] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.826 [INFO][5016] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.829 [INFO][5016] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.832 [INFO][5016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.832 [INFO][5016] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.835 [INFO][5016] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.842 [INFO][5016] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.853 [INFO][5016] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.67/26] block=192.168.9.64/26 handle="k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.853 [INFO][5016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.67/26] handle="k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.853 [INFO][5016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
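
[Annotation] Each sandbox start in this stretch pairs a containerd runc v2 shim (the "loading plugin io.containerd.ttrpc.v1.task" lines a few entries above) with a transient systemd scope named cri-containerd-<id>.scope. A hedged sketch of enumerating those containers and their task states through the containerd Go client follows; the socket path and the "k8s.io" namespace are assumptions matching the defaults a CRI setup like this one uses.

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Kubernetes-managed containers live in containerd's "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			// Sandboxes already torn down (like the ones above) have no task.
			fmt.Printf("%s: no running task\n", c.ID())
			continue
		}
		status, err := task.Status(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s\n", c.ID(), status.Status)
	}
}
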
Jan 28 01:26:20.903320 containerd[1726]: 2026-01-28 01:26:20.853 [INFO][5016] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.67/26] IPv6=[] ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" HandleID="k8s-pod-network.bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.903915 containerd[1726]: 2026-01-28 01:26:20.856 [INFO][5003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-sv4mj" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0", GenerateName:"calico-apiserver-84868d5f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"19367d42-7907-4f04-8c63-bcae87fa9f82", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84868d5f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"", Pod:"calico-apiserver-84868d5f79-sv4mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali536053265ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:20.903915 containerd[1726]: 2026-01-28 01:26:20.857 [INFO][5003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.67/32] ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-sv4mj" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.903915 containerd[1726]: 2026-01-28 01:26:20.857 [INFO][5003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali536053265ae ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-sv4mj" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.903915 containerd[1726]: 2026-01-28 01:26:20.864 [INFO][5003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-sv4mj" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.903915 containerd[1726]: 2026-01-28 01:26:20.864 [INFO][5003] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-sv4mj" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0", GenerateName:"calico-apiserver-84868d5f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"19367d42-7907-4f04-8c63-bcae87fa9f82", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84868d5f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc", Pod:"calico-apiserver-84868d5f79-sv4mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali536053265ae", MAC:"1a:b2:17:00:11:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:20.903915 containerd[1726]: 2026-01-28 01:26:20.897 [INFO][5003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc" Namespace="calico-apiserver" Pod="calico-apiserver-84868d5f79-sv4mj" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:26:20.947585 containerd[1726]: time="2026-01-28T01:26:20.946689811Z" level=info msg="StopPodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\"" Jan 28 01:26:20.948086 containerd[1726]: time="2026-01-28T01:26:20.947990450Z" level=info msg="StopPodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\"" Jan 28 01:26:20.948394 containerd[1726]: time="2026-01-28T01:26:20.948365050Z" level=info msg="StopPodSandbox for \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\"" Jan 28 01:26:20.950264 containerd[1726]: time="2026-01-28T01:26:20.949938729Z" level=info msg="StopPodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\"" Jan 28 01:26:21.006148 containerd[1726]: time="2026-01-28T01:26:21.006061925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:21.006334 containerd[1726]: time="2026-01-28T01:26:21.006310765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:21.006438 containerd[1726]: time="2026-01-28T01:26:21.006414405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:21.006704 containerd[1726]: time="2026-01-28T01:26:21.006602005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:21.093498 systemd[1]: Started cri-containerd-bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc.scope - libcontainer container bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc. Jan 28 01:26:21.158502 containerd[1726]: time="2026-01-28T01:26:21.158331247Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.073 [INFO][5122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.594 [INFO][5122] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" iface="eth0" netns="/var/run/netns/cni-f115077a-8d5d-baa4-9054-745c6f272bbd" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.594 [INFO][5122] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" iface="eth0" netns="/var/run/netns/cni-f115077a-8d5d-baa4-9054-745c6f272bbd" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.595 [INFO][5122] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" iface="eth0" netns="/var/run/netns/cni-f115077a-8d5d-baa4-9054-745c6f272bbd" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.595 [INFO][5122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.595 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.634 [INFO][5182] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.635 [INFO][5182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.635 [INFO][5182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.649 [WARNING][5182] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.649 [INFO][5182] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.651 [INFO][5182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:21.658513 containerd[1726]: 2026-01-28 01:26:21.655 [INFO][5122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.105 [INFO][5121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.594 [INFO][5121] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" iface="eth0" netns="/var/run/netns/cni-b58e8b1e-729a-b568-5b32-6ab2f373d985" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.594 [INFO][5121] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" iface="eth0" netns="/var/run/netns/cni-b58e8b1e-729a-b568-5b32-6ab2f373d985" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.596 [INFO][5121] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" iface="eth0" netns="/var/run/netns/cni-b58e8b1e-729a-b568-5b32-6ab2f373d985" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.596 [INFO][5121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.596 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.640 [INFO][5186] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.640 [INFO][5186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.651 [INFO][5186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.664 [WARNING][5186] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.664 [INFO][5186] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.666 [INFO][5186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:21.670277 containerd[1726]: 2026-01-28 01:26:21.668 [INFO][5121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.126 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.594 [INFO][5120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" iface="eth0" netns="/var/run/netns/cni-4dc9f2c2-cab8-b8f5-6a69-10f5e23cf2e9" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.594 [INFO][5120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" iface="eth0" netns="/var/run/netns/cni-4dc9f2c2-cab8-b8f5-6a69-10f5e23cf2e9" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.595 [INFO][5120] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" iface="eth0" netns="/var/run/netns/cni-4dc9f2c2-cab8-b8f5-6a69-10f5e23cf2e9" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.595 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.595 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.647 [INFO][5183] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.648 [INFO][5183] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.666 [INFO][5183] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.675 [WARNING][5183] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.675 [INFO][5183] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.677 [INFO][5183] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:21.682152 containerd[1726]: 2026-01-28 01:26:21.680 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.138 [INFO][5124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.595 [INFO][5124] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" iface="eth0" netns="/var/run/netns/cni-12733f2b-2a42-d3a3-bcff-4989cddfbb19" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.596 [INFO][5124] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" iface="eth0" netns="/var/run/netns/cni-12733f2b-2a42-d3a3-bcff-4989cddfbb19" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.597 [INFO][5124] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" iface="eth0" netns="/var/run/netns/cni-12733f2b-2a42-d3a3-bcff-4989cddfbb19" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.597 [INFO][5124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.597 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.649 [INFO][5187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.650 [INFO][5187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.677 [INFO][5187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.686 [WARNING][5187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.686 [INFO][5187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.687 [INFO][5187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:21.691394 containerd[1726]: 2026-01-28 01:26:21.689 [INFO][5124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:26:21.754774 containerd[1726]: time="2026-01-28T01:26:21.754501105Z" level=info msg="TearDown network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\" successfully" Jan 28 01:26:21.754774 containerd[1726]: time="2026-01-28T01:26:21.754537345Z" level=info msg="StopPodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\" returns successfully" Jan 28 01:26:21.754774 containerd[1726]: time="2026-01-28T01:26:21.754616145Z" level=info msg="TearDown network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\" successfully" Jan 28 01:26:21.754774 containerd[1726]: time="2026-01-28T01:26:21.754625545Z" level=info msg="StopPodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\" returns successfully" Jan 28 01:26:21.754774 containerd[1726]: time="2026-01-28T01:26:21.754650705Z" level=info msg="TearDown network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\" successfully" Jan 28 01:26:21.754774 containerd[1726]: time="2026-01-28T01:26:21.754659105Z" level=info msg="StopPodSandbox for \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\" returns successfully" Jan 28 01:26:21.755184 containerd[1726]: time="2026-01-28T01:26:21.755089865Z" level=info msg="TearDown network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\" successfully" Jan 28 01:26:21.755184 containerd[1726]: time="2026-01-28T01:26:21.755121225Z" level=info msg="StopPodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\" returns successfully" Jan 28 01:26:21.757476 containerd[1726]: time="2026-01-28T01:26:21.755625744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5ldbp,Uid:36f60f1b-edc5-4d4b-8496-6ed810707a8c,Namespace:kube-system,Attempt:1,}" Jan 28 01:26:21.757209 systemd[1]: run-netns-cni\x2d4dc9f2c2\x2dcab8\x2db8f5\x2d6a69\x2d10f5e23cf2e9.mount: Deactivated successfully. Jan 28 01:26:21.757298 systemd[1]: run-netns-cni\x2df115077a\x2d8d5d\x2dbaa4\x2d9054\x2d745c6f272bbd.mount: Deactivated successfully. Jan 28 01:26:21.757346 systemd[1]: run-netns-cni\x2db58e8b1e\x2d729a\x2db568\x2d5b32\x2d6ab2f373d985.mount: Deactivated successfully. Jan 28 01:26:21.757398 systemd[1]: run-netns-cni\x2d12733f2b\x2d2a42\x2dd3a3\x2dbcff\x2d4989cddfbb19.mount: Deactivated successfully. 
Jan 28 01:26:21.758161 containerd[1726]: time="2026-01-28T01:26:21.757879702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mm8vq,Uid:50b57260-757d-49ab-b412-157457a311f9,Namespace:calico-system,Attempt:1,}" Jan 28 01:26:21.758161 containerd[1726]: time="2026-01-28T01:26:21.757894942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6944bcdb-mk9w8,Uid:efad0924-58e2-470d-a190-d57cd8685e98,Namespace:calico-system,Attempt:1,}" Jan 28 01:26:21.758260 containerd[1726]: time="2026-01-28T01:26:21.758234542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fv4g9,Uid:e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6,Namespace:kube-system,Attempt:1,}" Jan 28 01:26:21.845566 systemd-networkd[1361]: calib29207a793b: Gained IPv6LL Jan 28 01:26:21.917307 containerd[1726]: time="2026-01-28T01:26:21.917071499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84868d5f79-sv4mj,Uid:19367d42-7907-4f04-8c63-bcae87fa9f82,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc\"" Jan 28 01:26:22.294225 systemd-networkd[1361]: cali536053265ae: Gained IPv6LL Jan 28 01:26:23.338018 containerd[1726]: time="2026-01-28T01:26:23.337904718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:23.338430 containerd[1726]: time="2026-01-28T01:26:23.338031277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:26:23.338637 kubelet[3198]: E0128 01:26:23.338385 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:23.338637 kubelet[3198]: E0128 01:26:23.338430 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:23.340442 containerd[1726]: time="2026-01-28T01:26:23.338829197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:23.344337 kubelet[3198]: E0128 01:26:23.344279 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmhzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84868d5f79-45qm5_calico-apiserver(79191197-2837-43fa-b284-2023c360b9e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:23.345526 kubelet[3198]: E0128 01:26:23.345488 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:26:23.585921 kubelet[3198]: E0128 01:26:23.585700 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:26:23.943078 containerd[1726]: time="2026-01-28T01:26:23.942796129Z" level=info msg="StopPodSandbox for 
\"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\"" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:23.990 [INFO][5236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:23.990 [INFO][5236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" iface="eth0" netns="/var/run/netns/cni-e9be92a7-b6cb-15f0-e4d7-4352c31449b0" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:23.991 [INFO][5236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" iface="eth0" netns="/var/run/netns/cni-e9be92a7-b6cb-15f0-e4d7-4352c31449b0" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:23.991 [INFO][5236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" iface="eth0" netns="/var/run/netns/cni-e9be92a7-b6cb-15f0-e4d7-4352c31449b0" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:23.991 [INFO][5236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:23.991 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:24.009 [INFO][5243] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:24.009 [INFO][5243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:24.009 [INFO][5243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:24.018 [WARNING][5243] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:24.018 [INFO][5243] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:24.019 [INFO][5243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:26:24.023621 containerd[1726]: 2026-01-28 01:26:24.021 [INFO][5236] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:26:24.025099 containerd[1726]: time="2026-01-28T01:26:24.024973025Z" level=info msg="TearDown network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\" successfully" Jan 28 01:26:24.025099 containerd[1726]: time="2026-01-28T01:26:24.025003545Z" level=info msg="StopPodSandbox for \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\" returns successfully" Jan 28 01:26:24.027095 containerd[1726]: time="2026-01-28T01:26:24.027063423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwqqh,Uid:b0ef4dca-fc9b-48e6-a83b-e247508a0b04,Namespace:calico-system,Attempt:1,}" Jan 28 01:26:24.027251 systemd[1]: run-netns-cni\x2de9be92a7\x2db6cb\x2d15f0\x2de4d7\x2d4352c31449b0.mount: Deactivated successfully. Jan 28 01:26:25.209781 containerd[1726]: time="2026-01-28T01:26:25.209737560Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:25.544239 containerd[1726]: time="2026-01-28T01:26:25.543973841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:25.544239 containerd[1726]: time="2026-01-28T01:26:25.544127721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:26:25.544382 kubelet[3198]: E0128 01:26:25.544293 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:25.544382 kubelet[3198]: E0128 01:26:25.544350 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:25.544680 kubelet[3198]: E0128 01:26:25.544634 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhj9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84868d5f79-sv4mj_calico-apiserver(19367d42-7907-4f04-8c63-bcae87fa9f82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:25.545917 kubelet[3198]: E0128 01:26:25.545856 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:26:25.591084 kubelet[3198]: E0128 01:26:25.591040 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:26:25.753823 systemd-networkd[1361]: calib4c9379b03b: Link UP Jan 28 01:26:25.754003 systemd-networkd[1361]: calib4c9379b03b: 
Gained carrier Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.679 [INFO][5251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0 coredns-668d6bf9bc- kube-system 36f60f1b-edc5-4d4b-8496-6ed810707a8c 998 0 2026-01-28 01:25:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-20d4350ff0 coredns-668d6bf9bc-5ldbp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4c9379b03b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Namespace="kube-system" Pod="coredns-668d6bf9bc-5ldbp" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.680 [INFO][5251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Namespace="kube-system" Pod="coredns-668d6bf9bc-5ldbp" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.700 [INFO][5263] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" HandleID="k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.700 [INFO][5263] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" HandleID="k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-20d4350ff0", "pod":"coredns-668d6bf9bc-5ldbp", "timestamp":"2026-01-28 01:26:25.700821488 +0000 UTC"}, Hostname:"ci-4081.3.6-n-20d4350ff0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.701 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.701 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
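[Annotation] The pull failures above show containerd resolving ghcr.io/flatcar/calico/apiserver:v3.30.4 to a 404 ("trying next host - response was http.StatusNotFound") and kubelet surfacing it as ErrImagePull. A quick way to confirm whether the tag simply does not exist is to probe the registry's v2 manifest endpoint directly. A minimal Go sketch; the anonymous-token flow and Accept header are assumptions about GHCR's standard behavior for public images, not anything taken from this log:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagExists checks whether repo:tag resolves in ghcr.io, mimicking the
// resolution step that failed in the containerd entries above.
func tagExists(repo, tag string) (bool, error) {
	// Step 1: fetch an anonymous pull token (assumed GHCR token endpoint).
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// Step 2: HEAD the manifest; 200 means the tag resolves, 404 mirrors
	// the "not found" in the pull error above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("flatcar/calico/apiserver", "v3.30.4")
	fmt.Println(ok, err)
}

If this prints false, the tag is genuinely absent and the ImagePullBackOff entries above will never clear until the reference is corrected or the image is pushed.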
Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.701 [INFO][5263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-20d4350ff0' Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.713 [INFO][5263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.717 [INFO][5263] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.721 [INFO][5263] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.722 [INFO][5263] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.724 [INFO][5263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.724 [INFO][5263] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.725 [INFO][5263] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976 Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.738 [INFO][5263] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.748 [INFO][5263] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.68/26] block=192.168.9.64/26 handle="k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.748 [INFO][5263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.68/26] handle="k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.748 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
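[Annotation] The sequence just logged (look up affinities for the host, try 192.168.9.64/26, load the block, assign one address, write the block back to claim it) is the core of block-based IPAM. A minimal sketch of the assignment step, not Calico's real code: the host owns a /26 block and hands out the first free ordinal, which is why the claimed addresses in this journal climb sequentially (.68, .69, .70, ...):

package main

import (
	"errors"
	"fmt"
	"net/netip"
)

type block struct {
	cidr netip.Prefix   // the host's affine block, e.g. 192.168.9.64/26
	used map[int]string // ordinal -> handle, like the HandleIDs in the log
}

// assign claims the first free ordinal in the block for the given handle,
// mirroring "Attempting to assign 1 addresses from block" above.
func (b *block) assign(handle string) (netip.Addr, error) {
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	addr := b.cidr.Addr()
	for ord := 0; ord < size; ord++ {
		if _, taken := b.used[ord]; !taken {
			b.used[ord] = handle // "Writing block in order to claim IPs"
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, errors.New("block full: a real allocator would claim a new block")
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.9.64/26"),
		// Pretend ordinals 0-3 (.64-.67) were claimed by earlier pods.
		used: map[int]string{0: "h0", 1: "h1", 2: "h2", 3: "h3"},
	}
	ip, err := b.assign("k8s-pod-network.3861d346...")
	fmt.Println(ip, err) // 192.168.9.68 <nil>, matching the claim above
}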
Jan 28 01:26:25.773194 containerd[1726]: 2026-01-28 01:26:25.748 [INFO][5263] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.68/26] IPv6=[] ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" HandleID="k8s-pod-network.3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:25.774758 containerd[1726]: 2026-01-28 01:26:25.750 [INFO][5251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Namespace="kube-system" Pod="coredns-668d6bf9bc-5ldbp" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"36f60f1b-edc5-4d4b-8496-6ed810707a8c", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"", Pod:"coredns-668d6bf9bc-5ldbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c9379b03b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:25.774758 containerd[1726]: 2026-01-28 01:26:25.750 [INFO][5251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.68/32] ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Namespace="kube-system" Pod="coredns-668d6bf9bc-5ldbp" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:25.774758 containerd[1726]: 2026-01-28 01:26:25.750 [INFO][5251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4c9379b03b ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Namespace="kube-system" Pod="coredns-668d6bf9bc-5ldbp" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:25.774758 containerd[1726]: 2026-01-28 01:26:25.752 [INFO][5251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-5ldbp" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:25.774758 containerd[1726]: 2026-01-28 01:26:25.753 [INFO][5251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Namespace="kube-system" Pod="coredns-668d6bf9bc-5ldbp" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"36f60f1b-edc5-4d4b-8496-6ed810707a8c", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976", Pod:"coredns-668d6bf9bc-5ldbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c9379b03b", MAC:"4e:06:d3:1f:b6:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:25.774758 containerd[1726]: 2026-01-28 01:26:25.771 [INFO][5251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976" Namespace="kube-system" Pod="coredns-668d6bf9bc-5ldbp" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:26:26.009966 systemd-networkd[1361]: calic5416b12b48: Link UP Jan 28 01:26:26.010181 systemd-networkd[1361]: calic5416b12b48: Gained carrier Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.919 [INFO][5282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0 goldmane-666569f655- calico-system 50b57260-757d-49ab-b412-157457a311f9 996 0 2026-01-28 01:25:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-20d4350ff0 goldmane-666569f655-mm8vq eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] calic5416b12b48 [] [] }} ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Namespace="calico-system" Pod="goldmane-666569f655-mm8vq" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.920 [INFO][5282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Namespace="calico-system" Pod="goldmane-666569f655-mm8vq" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.944 [INFO][5295] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" HandleID="k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.944 [INFO][5295] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" HandleID="k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-20d4350ff0", "pod":"goldmane-666569f655-mm8vq", "timestamp":"2026-01-28 01:26:25.944397594 +0000 UTC"}, Hostname:"ci-4081.3.6-n-20d4350ff0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.945 [INFO][5295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.945 [INFO][5295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
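[Annotation] Every plugin invocation in this journal ([5243], [5263], [5295], [5328], [5358], [5495]) brackets its whole assignment in "About to acquire" / "Acquired" / "Released host-wide IPAM lock" messages. Whatever the real lock's implementation, the purpose is to serialize concurrent CNI ADDs on one node so two pods can never claim the same ordinal from the shared block. A toy illustration of the pattern with a mutex:

package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	next int
}

func (h *hostIPAM) autoAssign(pod string) int {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ord := h.next
	h.next++
	fmt.Printf("assigned ordinal %d to %s\n", ord, pod)
	return ord
}

func main() {
	h := &hostIPAM{}
	var wg sync.WaitGroup
	// Three sandboxes racing, as in the journal; each gets a unique ordinal.
	for _, pod := range []string{"coredns-668d6bf9bc-5ldbp", "goldmane-666569f655-mm8vq", "csi-node-driver-kwqqh"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			h.autoAssign(p)
		}(pod)
	}
	wg.Wait()
}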
Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.945 [INFO][5295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-20d4350ff0' Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.958 [INFO][5295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.963 [INFO][5295] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.967 [INFO][5295] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.971 [INFO][5295] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.973 [INFO][5295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.975 [INFO][5295] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.977 [INFO][5295] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866 Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.989 [INFO][5295] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.999 [INFO][5295] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.69/26] block=192.168.9.64/26 handle="k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.999 [INFO][5295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.69/26] handle="k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.999 [INFO][5295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
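[Annotation] A few entries below, each endpoint is written back with a generated MAC (62:db:bc:ab:77:3f, 4e:06:d3:1f:b6:a4, 5a:ec:36:df:36:c2, de:c0:df:e0:96:39). All of them have the locally-administered bit set and the multicast bit clear, consistent with the usual random-MAC recipe; whether Calico generates them exactly this way is an assumption, but the rule itself is standard:

package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomMAC returns a random unicast, locally administered MAC: six random
// bytes with the first octet's multicast bit cleared and local bit set.
func randomMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01 // set local-admin bit, clear multicast bit
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. 62:db:bc:ab:77:3f-shaped output
}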
Jan 28 01:26:26.035605 containerd[1726]: 2026-01-28 01:26:25.999 [INFO][5295] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.69/26] IPv6=[] ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" HandleID="k8s-pod-network.244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:26.036141 containerd[1726]: 2026-01-28 01:26:26.003 [INFO][5282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Namespace="calico-system" Pod="goldmane-666569f655-mm8vq" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50b57260-757d-49ab-b412-157457a311f9", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"", Pod:"goldmane-666569f655-mm8vq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic5416b12b48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:26.036141 containerd[1726]: 2026-01-28 01:26:26.003 [INFO][5282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.69/32] ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Namespace="calico-system" Pod="goldmane-666569f655-mm8vq" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:26.036141 containerd[1726]: 2026-01-28 01:26:26.003 [INFO][5282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5416b12b48 ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Namespace="calico-system" Pod="goldmane-666569f655-mm8vq" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:26.036141 containerd[1726]: 2026-01-28 01:26:26.009 [INFO][5282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Namespace="calico-system" Pod="goldmane-666569f655-mm8vq" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:26.036141 containerd[1726]: 2026-01-28 01:26:26.010 [INFO][5282] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" 
Namespace="calico-system" Pod="goldmane-666569f655-mm8vq" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50b57260-757d-49ab-b412-157457a311f9", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866", Pod:"goldmane-666569f655-mm8vq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic5416b12b48", MAC:"62:db:bc:ab:77:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:26.036141 containerd[1726]: 2026-01-28 01:26:26.028 [INFO][5282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866" Namespace="calico-system" Pod="goldmane-666569f655-mm8vq" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:26:26.122074 containerd[1726]: time="2026-01-28T01:26:26.120285828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:26.122074 containerd[1726]: time="2026-01-28T01:26:26.120337868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:26.122429 containerd[1726]: time="2026-01-28T01:26:26.121085108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.122429 containerd[1726]: time="2026-01-28T01:26:26.121976827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.147629 systemd[1]: Started cri-containerd-3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976.scope - libcontainer container 3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976. Jan 28 01:26:26.172881 containerd[1726]: time="2026-01-28T01:26:26.172674231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:26.173030 containerd[1726]: time="2026-01-28T01:26:26.172820591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:26.173030 containerd[1726]: time="2026-01-28T01:26:26.172845351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.174231 containerd[1726]: time="2026-01-28T01:26:26.174020550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.184865 systemd-networkd[1361]: cali7b30a7046ed: Link UP Jan 28 01:26:26.187428 systemd-networkd[1361]: cali7b30a7046ed: Gained carrier Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:25.998 [INFO][5300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0 calico-kube-controllers-f6944bcdb- calico-system efad0924-58e2-470d-a190-d57cd8685e98 997 0 2026-01-28 01:25:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f6944bcdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-20d4350ff0 calico-kube-controllers-f6944bcdb-mk9w8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7b30a7046ed [] [] }} ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Namespace="calico-system" Pod="calico-kube-controllers-f6944bcdb-mk9w8" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:25.999 [INFO][5300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Namespace="calico-system" Pod="calico-kube-controllers-f6944bcdb-mk9w8" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.072 [INFO][5328] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" HandleID="k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.095 [INFO][5328] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" HandleID="k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-20d4350ff0", "pod":"calico-kube-controllers-f6944bcdb-mk9w8", "timestamp":"2026-01-28 01:26:26.072657542 +0000 UTC"}, Hostname:"ci-4081.3.6-n-20d4350ff0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.096 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.096 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.096 [INFO][5328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-20d4350ff0' Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.112 [INFO][5328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.122 [INFO][5328] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.135 [INFO][5328] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.139 [INFO][5328] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.144 [INFO][5328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.144 [INFO][5328] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.147 [INFO][5328] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.159 [INFO][5328] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.174 [INFO][5328] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.70/26] block=192.168.9.64/26 handle="k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.174 [INFO][5328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.70/26] handle="k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.174 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
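[Annotation] "Setting the host side veth name to cali7b30a7046ed" (and calib4c9379b03b, calic5416b12b48, ... earlier) shows the other identifier every endpoint needs: a host-side interface name that is unique per workload yet fits the kernel's 15-character interface-name limit. One plausible scheme, sketched here with the exact hash input and truncation as assumptions rather than facts read out of this log, is a fixed prefix plus a truncated digest of the endpoint identity:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a stable host-side interface name from an endpoint
// identifier: "cali" plus 11 hex chars keeps it at 15 characters total.
func vethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical input; the real scheme's input is not shown in the log.
	fmt.Println(vethName("kube-system/coredns-668d6bf9bc-5ldbp"))
}

Hashing rather than numbering means the name survives restarts and can be recomputed from the endpoint alone, which is why the journal can refer to the interface by name before the link even gains carrier.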
Jan 28 01:26:26.228928 containerd[1726]: 2026-01-28 01:26:26.175 [INFO][5328] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.70/26] IPv6=[] ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" HandleID="k8s-pod-network.9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:26.233022 containerd[1726]: 2026-01-28 01:26:26.179 [INFO][5300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Namespace="calico-system" Pod="calico-kube-controllers-f6944bcdb-mk9w8" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0", GenerateName:"calico-kube-controllers-f6944bcdb-", Namespace:"calico-system", SelfLink:"", UID:"efad0924-58e2-470d-a190-d57cd8685e98", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6944bcdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"", Pod:"calico-kube-controllers-f6944bcdb-mk9w8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b30a7046ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:26.233022 containerd[1726]: 2026-01-28 01:26:26.179 [INFO][5300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.70/32] ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Namespace="calico-system" Pod="calico-kube-controllers-f6944bcdb-mk9w8" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:26.233022 containerd[1726]: 2026-01-28 01:26:26.179 [INFO][5300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b30a7046ed ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Namespace="calico-system" Pod="calico-kube-controllers-f6944bcdb-mk9w8" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:26.233022 containerd[1726]: 2026-01-28 01:26:26.188 [INFO][5300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Namespace="calico-system" Pod="calico-kube-controllers-f6944bcdb-mk9w8" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 
01:26:26.233022 containerd[1726]: 2026-01-28 01:26:26.191 [INFO][5300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Namespace="calico-system" Pod="calico-kube-controllers-f6944bcdb-mk9w8" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0", GenerateName:"calico-kube-controllers-f6944bcdb-", Namespace:"calico-system", SelfLink:"", UID:"efad0924-58e2-470d-a190-d57cd8685e98", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6944bcdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d", Pod:"calico-kube-controllers-f6944bcdb-mk9w8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b30a7046ed", MAC:"5a:ec:36:df:36:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:26.233022 containerd[1726]: 2026-01-28 01:26:26.217 [INFO][5300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d" Namespace="calico-system" Pod="calico-kube-controllers-f6944bcdb-mk9w8" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:26:26.230650 systemd[1]: Started cri-containerd-244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866.scope - libcontainer container 244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866. Jan 28 01:26:26.263225 containerd[1726]: time="2026-01-28T01:26:26.262292207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5ldbp,Uid:36f60f1b-edc5-4d4b-8496-6ed810707a8c,Namespace:kube-system,Attempt:1,} returns sandbox id \"3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976\"" Jan 28 01:26:26.296742 containerd[1726]: time="2026-01-28T01:26:26.296694302Z" level=info msg="CreateContainer within sandbox \"3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:26:26.309344 containerd[1726]: time="2026-01-28T01:26:26.308575333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:26.309344 containerd[1726]: time="2026-01-28T01:26:26.308647973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:26.309344 containerd[1726]: time="2026-01-28T01:26:26.308663813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.309344 containerd[1726]: time="2026-01-28T01:26:26.308763973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.355847 systemd[1]: Started cri-containerd-9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d.scope - libcontainer container 9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d. Jan 28 01:26:26.371269 containerd[1726]: time="2026-01-28T01:26:26.371235009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mm8vq,Uid:50b57260-757d-49ab-b412-157457a311f9,Namespace:calico-system,Attempt:1,} returns sandbox id \"244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866\"" Jan 28 01:26:26.374583 systemd-networkd[1361]: cali59ed2d654ee: Link UP Jan 28 01:26:26.376523 containerd[1726]: time="2026-01-28T01:26:26.375076326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:26:26.376933 systemd-networkd[1361]: cali59ed2d654ee: Gained carrier Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.076 [INFO][5316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0 coredns-668d6bf9bc- kube-system e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6 999 0 2026-01-28 01:25:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-20d4350ff0 coredns-668d6bf9bc-fv4g9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali59ed2d654ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fv4g9" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.096 [INFO][5316] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fv4g9" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.165 [INFO][5358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" HandleID="k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.165 [INFO][5358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" HandleID="k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" 
Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c10f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-20d4350ff0", "pod":"coredns-668d6bf9bc-fv4g9", "timestamp":"2026-01-28 01:26:26.165714716 +0000 UTC"}, Hostname:"ci-4081.3.6-n-20d4350ff0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.165 [INFO][5358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.177 [INFO][5358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.177 [INFO][5358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-20d4350ff0' Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.221 [INFO][5358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.245 [INFO][5358] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.256 [INFO][5358] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.266 [INFO][5358] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.279 [INFO][5358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.281 [INFO][5358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.286 [INFO][5358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9 Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.327 [INFO][5358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.347 [INFO][5358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.71/26] block=192.168.9.64/26 handle="k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.347 [INFO][5358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.71/26] handle="k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.348 [INFO][5358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:26:26.409381 containerd[1726]: 2026-01-28 01:26:26.348 [INFO][5358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.71/26] IPv6=[] ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" HandleID="k8s-pod-network.18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:26.411275 containerd[1726]: 2026-01-28 01:26:26.360 [INFO][5316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fv4g9" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"", Pod:"coredns-668d6bf9bc-fv4g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59ed2d654ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:26.411275 containerd[1726]: 2026-01-28 01:26:26.360 [INFO][5316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.71/32] ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fv4g9" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:26.411275 containerd[1726]: 2026-01-28 01:26:26.360 [INFO][5316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59ed2d654ee ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fv4g9" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:26.411275 containerd[1726]: 2026-01-28 01:26:26.376 [INFO][5316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-fv4g9" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:26.411275 containerd[1726]: 2026-01-28 01:26:26.378 [INFO][5316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fv4g9" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9", Pod:"coredns-668d6bf9bc-fv4g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59ed2d654ee", MAC:"de:c0:df:e0:96:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:26.411275 containerd[1726]: 2026-01-28 01:26:26.404 [INFO][5316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fv4g9" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:26:26.421334 containerd[1726]: time="2026-01-28T01:26:26.421292493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6944bcdb-mk9w8,Uid:efad0924-58e2-470d-a190-d57cd8685e98,Namespace:calico-system,Attempt:1,} returns sandbox id \"9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d\"" Jan 28 01:26:26.489032 systemd-networkd[1361]: calie4361da1fae: Link UP Jan 28 01:26:26.490636 systemd-networkd[1361]: calie4361da1fae: Gained carrier Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.370 [INFO][5413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0 csi-node-driver- calico-system b0ef4dca-fc9b-48e6-a83b-e247508a0b04 1014 0 2026-01-28 
01:25:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-20d4350ff0 csi-node-driver-kwqqh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4361da1fae [] [] }} ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Namespace="calico-system" Pod="csi-node-driver-kwqqh" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.370 [INFO][5413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Namespace="calico-system" Pod="csi-node-driver-kwqqh" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.435 [INFO][5495] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" HandleID="k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.436 [INFO][5495] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" HandleID="k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-20d4350ff0", "pod":"csi-node-driver-kwqqh", "timestamp":"2026-01-28 01:26:26.435899682 +0000 UTC"}, Hostname:"ci-4081.3.6-n-20d4350ff0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.436 [INFO][5495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.436 [INFO][5495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.436 [INFO][5495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-20d4350ff0' Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.451 [INFO][5495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.455 [INFO][5495] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.458 [INFO][5495] ipam/ipam.go 511: Trying affinity for 192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.462 [INFO][5495] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.465 [INFO][5495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.465 [INFO][5495] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.467 [INFO][5495] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5 Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.472 [INFO][5495] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.482 [INFO][5495] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.72/26] block=192.168.9.64/26 handle="k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.483 [INFO][5495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.72/26] handle="k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" host="ci-4081.3.6-n-20d4350ff0" Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.483 [INFO][5495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
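The IPAM records above show Calico's assignment path end to end: take the host-wide lock, confirm this node's affinity for the block 192.168.9.64/26, claim 192.168.9.72 from it, and write the block back before releasing the lock. Below is a minimal sketch of the block arithmetic using only the Go standard library; the prefix and address are taken from the log, and nothing here is Calico's own implementation.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values taken from the ipam/ipam.go records above.
	block := netip.MustParsePrefix("192.168.9.64/26")
	assigned := netip.MustParseAddr("192.168.9.72")

	// A /26 holds 2^(32-26) = 64 addresses; Calico carves its pools
	// into blocks like this and gives each node an affinity to one.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// The claimed address must fall inside the node's affine block.
	fmt.Printf("%s in %s: %v\n", assigned, block, block.Contains(assigned))

	// 0-based position within the block: .64 -> 0, so .72 -> 8,
	// consistent with .71 having just gone to the coredns pod above.
	ord := int(assigned.As4()[3]) - int(block.Addr().As4()[3])
	fmt.Printf("ordinal within block: %d\n", ord)
}
```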
Jan 28 01:26:26.526655 containerd[1726]: 2026-01-28 01:26:26.483 [INFO][5495] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.72/26] IPv6=[] ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" HandleID="k8s-pod-network.fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:26.527292 containerd[1726]: 2026-01-28 01:26:26.486 [INFO][5413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Namespace="calico-system" Pod="csi-node-driver-kwqqh" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0ef4dca-fc9b-48e6-a83b-e247508a0b04", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"", Pod:"csi-node-driver-kwqqh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4361da1fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:26.527292 containerd[1726]: 2026-01-28 01:26:26.486 [INFO][5413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.72/32] ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Namespace="calico-system" Pod="csi-node-driver-kwqqh" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:26.527292 containerd[1726]: 2026-01-28 01:26:26.486 [INFO][5413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4361da1fae ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Namespace="calico-system" Pod="csi-node-driver-kwqqh" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:26.527292 containerd[1726]: 2026-01-28 01:26:26.491 [INFO][5413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Namespace="calico-system" Pod="csi-node-driver-kwqqh" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:26.527292 containerd[1726]: 2026-01-28 01:26:26.492 [INFO][5413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Namespace="calico-system" Pod="csi-node-driver-kwqqh" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0ef4dca-fc9b-48e6-a83b-e247508a0b04", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5", Pod:"csi-node-driver-kwqqh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4361da1fae", MAC:"be:57:aa:b6:89:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:26:26.527292 containerd[1726]: 2026-01-28 01:26:26.518 [INFO][5413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5" Namespace="calico-system" Pod="csi-node-driver-kwqqh" WorkloadEndpoint="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:26:26.534513 containerd[1726]: time="2026-01-28T01:26:26.534253972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:26.534513 containerd[1726]: time="2026-01-28T01:26:26.534302132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:26.534513 containerd[1726]: time="2026-01-28T01:26:26.534322092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.534513 containerd[1726]: time="2026-01-28T01:26:26.534398092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.560067 systemd[1]: Started cri-containerd-18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9.scope - libcontainer container 18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9. 
Jan 28 01:26:26.590190 containerd[1726]: time="2026-01-28T01:26:26.590143452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fv4g9,Uid:e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6,Namespace:kube-system,Attempt:1,} returns sandbox id \"18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9\"" Jan 28 01:26:26.601927 containerd[1726]: time="2026-01-28T01:26:26.601874804Z" level=info msg="CreateContainer within sandbox \"18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:26:26.668242 containerd[1726]: time="2026-01-28T01:26:26.667924356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:26.668242 containerd[1726]: time="2026-01-28T01:26:26.667973436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:26.668242 containerd[1726]: time="2026-01-28T01:26:26.667983636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.668242 containerd[1726]: time="2026-01-28T01:26:26.668058236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:26.685632 systemd[1]: Started cri-containerd-fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5.scope - libcontainer container fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5. Jan 28 01:26:26.710112 containerd[1726]: time="2026-01-28T01:26:26.710079886Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:26.715258 containerd[1726]: time="2026-01-28T01:26:26.715074443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwqqh,Uid:b0ef4dca-fc9b-48e6-a83b-e247508a0b04,Namespace:calico-system,Attempt:1,} returns sandbox id \"fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5\"" Jan 28 01:26:26.904075 containerd[1726]: time="2026-01-28T01:26:26.903900867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:26:26.906214 containerd[1726]: time="2026-01-28T01:26:26.904193507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:26:26.906214 containerd[1726]: time="2026-01-28T01:26:26.904819947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:26:26.906290 kubelet[3198]: E0128 01:26:26.904309 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:26.906290 kubelet[3198]: E0128 01:26:26.904361 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:26.906290 kubelet[3198]: E0128 01:26:26.904659 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bd44v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mm8vq_calico-system(50b57260-757d-49ab-b412-157457a311f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:26.906759 kubelet[3198]: E0128 01:26:26.906665 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:26:27.001873 containerd[1726]: time="2026-01-28T01:26:27.001665437Z" level=info msg="CreateContainer within sandbox \"3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0e89c81961766f70a0b0b4fe41283955976e9de8b3d5e57c2ed4e4bda9fbf95\"" Jan 28 01:26:27.006679 containerd[1726]: time="2026-01-28T01:26:27.006644514Z" level=info msg="StartContainer for \"d0e89c81961766f70a0b0b4fe41283955976e9de8b3d5e57c2ed4e4bda9fbf95\"" Jan 28 01:26:27.030677 systemd[1]: Started cri-containerd-d0e89c81961766f70a0b0b4fe41283955976e9de8b3d5e57c2ed4e4bda9fbf95.scope - libcontainer container d0e89c81961766f70a0b0b4fe41283955976e9de8b3d5e57c2ed4e4bda9fbf95. Jan 28 01:26:27.096786 containerd[1726]: time="2026-01-28T01:26:27.096749249Z" level=info msg="StartContainer for \"d0e89c81961766f70a0b0b4fe41283955976e9de8b3d5e57c2ed4e4bda9fbf95\" returns successfully" Jan 28 01:26:27.154018 containerd[1726]: time="2026-01-28T01:26:27.153977608Z" level=info msg="CreateContainer within sandbox \"18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40bed69240254750389fd3912dc9e73f92e5e4d8d060cd3447f898595f0356fa\"" Jan 28 01:26:27.155731 containerd[1726]: time="2026-01-28T01:26:27.155155688Z" level=info msg="StartContainer for \"40bed69240254750389fd3912dc9e73f92e5e4d8d060cd3447f898595f0356fa\"" Jan 28 01:26:27.178618 systemd[1]: Started cri-containerd-40bed69240254750389fd3912dc9e73f92e5e4d8d060cd3447f898595f0356fa.scope - libcontainer container 40bed69240254750389fd3912dc9e73f92e5e4d8d060cd3447f898595f0356fa. 
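From here on the log settles into a repeating pattern: containerd's resolver gets http.StatusNotFound from ghcr.io, logs "trying next host", and surfaces a NotFound RPC error, which kubelet then records as ErrImagePull for the goldmane, kube-controllers, csi, and node-driver-registrar images in turn. A rough way to reproduce the registry side of that probe is sketched below; authentication is deliberately omitted (ghcr.io normally answers 401 until a bearer token is presented), so treat this as an illustration of the status-code handling only, not of containerd's actual resolver.

```go
package main

import (
	"fmt"
	"net/http"
)

// manifestStatus issues a HEAD against a registry-v2 manifest URL,
// the same endpoint shape a resolver checks before pulling.
func manifestStatus(registry, name, tag string) (int, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, name, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return 0, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// Image reference taken from the failing pull above.
	code, err := manifestStatus("ghcr.io", "flatcar/calico/goldmane", "v3.30.4")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	// 404 is what containerd reports as "not found" in the log;
	// 401 means the registry wants a token before answering.
	fmt.Println("manifest HEAD status:", code)
}
```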
Jan 28 01:26:27.207421 containerd[1726]: time="2026-01-28T01:26:27.207378810Z" level=info msg="StartContainer for \"40bed69240254750389fd3912dc9e73f92e5e4d8d060cd3447f898595f0356fa\" returns successfully" Jan 28 01:26:27.274005 containerd[1726]: time="2026-01-28T01:26:27.273965283Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:27.278958 containerd[1726]: time="2026-01-28T01:26:27.277600560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:26:27.278958 containerd[1726]: time="2026-01-28T01:26:27.277670680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:26:27.278958 containerd[1726]: time="2026-01-28T01:26:27.278775999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:26:27.279203 kubelet[3198]: E0128 01:26:27.277812 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:27.279203 kubelet[3198]: E0128 01:26:27.277853 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:27.279203 kubelet[3198]: E0128 01:26:27.278052 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc8mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f6944bcdb-mk9w8_calico-system(efad0924-58e2-470d-a190-d57cd8685e98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:27.279203 kubelet[3198]: E0128 01:26:27.279125 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:26:27.349649 systemd-networkd[1361]: calib4c9379b03b: Gained IPv6LL Jan 28 01:26:27.542283 containerd[1726]: time="2026-01-28T01:26:27.542030171Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:27.544687 containerd[1726]: time="2026-01-28T01:26:27.544591849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:26:27.544687 containerd[1726]: time="2026-01-28T01:26:27.544656489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:26:27.544870 kubelet[3198]: E0128 01:26:27.544832 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:27.544926 kubelet[3198]: E0128 01:26:27.544877 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:27.547187 kubelet[3198]: E0128 01:26:27.547134 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqzmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:27.549336 containerd[1726]: time="2026-01-28T01:26:27.549165606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:26:27.609338 kubelet[3198]: E0128 01:26:27.609304 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:26:27.609837 kubelet[3198]: E0128 01:26:27.609785 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:26:27.642722 kubelet[3198]: I0128 01:26:27.641658 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fv4g9" podStartSLOduration=65.641641139 podStartE2EDuration="1m5.641641139s" podCreationTimestamp="2026-01-28 01:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:26:27.623210233 +0000 UTC m=+70.786351375" watchObservedRunningTime="2026-01-28 01:26:27.641641139 +0000 UTC m=+70.804782281" Jan 28 01:26:27.694322 kubelet[3198]: I0128 01:26:27.694245 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5ldbp" podStartSLOduration=65.694227422 podStartE2EDuration="1m5.694227422s" podCreationTimestamp="2026-01-28 01:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:26:27.693331062 +0000 UTC m=+70.856472204" watchObservedRunningTime="2026-01-28 01:26:27.694227422 +0000 UTC m=+70.857368564" Jan 28 01:26:27.797662 systemd-networkd[1361]: cali7b30a7046ed: Gained IPv6LL Jan 28 01:26:27.816191 containerd[1726]: time="2026-01-28T01:26:27.816145855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:27.818944 containerd[1726]: time="2026-01-28T01:26:27.818891773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:26:27.819061 containerd[1726]: time="2026-01-28T01:26:27.818907373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:26:27.819161 kubelet[3198]: E0128 01:26:27.819124 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:27.819205 kubelet[3198]: E0128 01:26:27.819172 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:27.819339 kubelet[3198]: E0128 01:26:27.819297 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqzmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:27.820722 kubelet[3198]: E0128 01:26:27.820676 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:26:27.925659 systemd-networkd[1361]: calic5416b12b48: Gained IPv6LL Jan 28 01:26:28.245655 systemd-networkd[1361]: calie4361da1fae: Gained IPv6LL Jan 28 01:26:28.309638 systemd-networkd[1361]: cali59ed2d654ee: Gained IPv6LL Jan 28 01:26:28.611349 kubelet[3198]: E0128 01:26:28.611259 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:26:28.944819 containerd[1726]: time="2026-01-28T01:26:28.944530087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:26:29.229130 containerd[1726]: time="2026-01-28T01:26:29.229019203Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:29.233303 containerd[1726]: time="2026-01-28T01:26:29.233261120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:26:29.233488 containerd[1726]: time="2026-01-28T01:26:29.233444280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:26:29.234204 kubelet[3198]: E0128 01:26:29.233728 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:29.234204 kubelet[3198]: E0128 01:26:29.233773 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:29.234204 kubelet[3198]: E0128 01:26:29.233868 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41cd963ee3c14a94bb038663169e4951,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-766b799ccb-m5599_calico-system(7986b68d-2b69-4fd3-a1ac-2bbd1d928663): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:29.236858 containerd[1726]: time="2026-01-28T01:26:29.235990518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:26:29.502748 containerd[1726]: time="2026-01-28T01:26:29.502599168Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:29.505603 containerd[1726]: time="2026-01-28T01:26:29.505559685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:26:29.505711 containerd[1726]: time="2026-01-28T01:26:29.505659885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:26:29.505860 kubelet[3198]: E0128 01:26:29.505801 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:29.505915 kubelet[3198]: E0128 01:26:29.505878 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:29.506317 kubelet[3198]: E0128 01:26:29.505986 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-766b799ccb-m5599_calico-system(7986b68d-2b69-4fd3-a1ac-2bbd1d928663): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:29.507130 kubelet[3198]: E0128 01:26:29.507095 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:26:38.946873 containerd[1726]: time="2026-01-28T01:26:38.946621867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:39.246254 containerd[1726]: time="2026-01-28T01:26:39.246141444Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:39.251213 containerd[1726]: time="2026-01-28T01:26:39.251030160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:39.251213 containerd[1726]: time="2026-01-28T01:26:39.251100080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:26:39.251336 kubelet[3198]: E0128 01:26:39.251240 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:39.251336 kubelet[3198]: E0128 01:26:39.251290 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:39.251687 kubelet[3198]: E0128 01:26:39.251415 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmhzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84868d5f79-45qm5_calico-apiserver(79191197-2837-43fa-b284-2023c360b9e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:39.252541 kubelet[3198]: E0128 01:26:39.252507 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:26:39.944600 containerd[1726]: time="2026-01-28T01:26:39.944557725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:26:40.200889 containerd[1726]: time="2026-01-28T01:26:40.200774295Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:40.204975 containerd[1726]: time="2026-01-28T01:26:40.204875052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:26:40.204975 containerd[1726]: time="2026-01-28T01:26:40.204943892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:26:40.205218 kubelet[3198]: E0128 01:26:40.205175 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:40.205928 kubelet[3198]: E0128 01:26:40.205227 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:40.205928 kubelet[3198]: E0128 01:26:40.205492 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc8mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f6944bcdb-mk9w8_calico-system(efad0924-58e2-470d-a190-d57cd8685e98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:40.206100 containerd[1726]: time="2026-01-28T01:26:40.205822091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:40.207448 kubelet[3198]: E0128 01:26:40.207419 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:26:40.442972 containerd[1726]: time="2026-01-28T01:26:40.442927470Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:40.446415 containerd[1726]: time="2026-01-28T01:26:40.446376307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:40.446516 containerd[1726]: time="2026-01-28T01:26:40.446485107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:26:40.446662 kubelet[3198]: E0128 01:26:40.446624 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:40.446912 kubelet[3198]: E0128 01:26:40.446671 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:40.446912 kubelet[3198]: E0128 01:26:40.446780 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhj9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84868d5f79-sv4mj_calico-apiserver(19367d42-7907-4f04-8c63-bcae87fa9f82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:40.448214 kubelet[3198]: E0128 01:26:40.448167 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:26:40.945055 containerd[1726]: time="2026-01-28T01:26:40.945001834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:26:41.202284 containerd[1726]: time="2026-01-28T01:26:41.202102951Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:41.207932 containerd[1726]: time="2026-01-28T01:26:41.207839706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:26:41.207932 containerd[1726]: time="2026-01-28T01:26:41.207900426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:26:41.208086 kubelet[3198]: E0128 01:26:41.208059 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:41.208128 kubelet[3198]: E0128 01:26:41.208102 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:41.208253 kubelet[3198]: E0128 01:26:41.208212 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqzmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:41.211148 containerd[1726]: time="2026-01-28T01:26:41.211117384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:26:41.471107 containerd[1726]: time="2026-01-28T01:26:41.470677459Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:41.473199 containerd[1726]: time="2026-01-28T01:26:41.473142097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:26:41.473309 containerd[1726]: time="2026-01-28T01:26:41.473270097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:26:41.473655 kubelet[3198]: E0128 01:26:41.473437 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:41.473655 kubelet[3198]: E0128 01:26:41.473495 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:41.473655 kubelet[3198]: E0128 01:26:41.473607 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqzmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:41.474933 kubelet[3198]: E0128 01:26:41.474894 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:26:41.945533 kubelet[3198]: E0128 01:26:41.945367 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:26:42.955942 containerd[1726]: time="2026-01-28T01:26:42.955900728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:26:43.215956 containerd[1726]: time="2026-01-28T01:26:43.215833683Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:43.220891 containerd[1726]: time="2026-01-28T01:26:43.220790319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:26:43.220891 containerd[1726]: time="2026-01-28T01:26:43.220858799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:26:43.221076 kubelet[3198]: E0128 01:26:43.221003 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:43.221076 kubelet[3198]: E0128 01:26:43.221045 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:43.226989 kubelet[3198]: E0128 01:26:43.226919 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bd44v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mm8vq_calico-system(50b57260-757d-49ab-b412-157457a311f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:43.228200 kubelet[3198]: E0128 01:26:43.228154 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:26:51.944632 kubelet[3198]: E0128 
01:26:51.944129 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:26:53.945737 kubelet[3198]: E0128 01:26:53.945688 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:26:53.946516 containerd[1726]: time="2026-01-28T01:26:53.946484443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:26:54.225819 containerd[1726]: time="2026-01-28T01:26:54.225316634Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:54.228240 containerd[1726]: time="2026-01-28T01:26:54.228167232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:26:54.228240 containerd[1726]: time="2026-01-28T01:26:54.228212072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:26:54.229103 kubelet[3198]: E0128 01:26:54.228664 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:54.229103 kubelet[3198]: E0128 01:26:54.228712 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:54.229593 kubelet[3198]: E0128 01:26:54.229247 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41cd963ee3c14a94bb038663169e4951,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-766b799ccb-m5599_calico-system(7986b68d-2b69-4fd3-a1ac-2bbd1d928663): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:54.231077 containerd[1726]: time="2026-01-28T01:26:54.231045750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:26:54.490662 containerd[1726]: time="2026-01-28T01:26:54.490536475Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:26:54.493154 containerd[1726]: time="2026-01-28T01:26:54.493101433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:26:54.493364 containerd[1726]: time="2026-01-28T01:26:54.493200713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:26:54.493622 kubelet[3198]: E0128 01:26:54.493479 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:54.493622 kubelet[3198]: E0128 01:26:54.493528 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:54.493937 kubelet[3198]: E0128 01:26:54.493752 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-766b799ccb-m5599_calico-system(7986b68d-2b69-4fd3-a1ac-2bbd1d928663): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:54.495046 kubelet[3198]: E0128 01:26:54.494983 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:26:54.946568 kubelet[3198]: E0128 01:26:54.945671 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:26:55.944101 kubelet[3198]: E0128 01:26:55.943713 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:26:55.944101 kubelet[3198]: E0128 01:26:55.944002 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:27:05.945236 kubelet[3198]: E0128 01:27:05.945112 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:27:06.944825 containerd[1726]: time="2026-01-28T01:27:06.944598566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:07.224662 containerd[1726]: time="2026-01-28T01:27:07.224389785Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:07.226867 containerd[1726]: time="2026-01-28T01:27:07.226754383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:07.226867 containerd[1726]: time="2026-01-28T01:27:07.226793903Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:07.227024 kubelet[3198]: E0128 01:27:07.226983 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:07.227327 kubelet[3198]: E0128 01:27:07.227034 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:07.227327 kubelet[3198]: E0128 01:27:07.227235 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmhzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84868d5f79-45qm5_calico-apiserver(79191197-2837-43fa-b284-2023c360b9e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:07.227977 containerd[1726]: time="2026-01-28T01:27:07.227753703Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:27:07.229201 kubelet[3198]: E0128 01:27:07.229159 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:27:07.511017 containerd[1726]: time="2026-01-28T01:27:07.510286359Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:07.515086 containerd[1726]: time="2026-01-28T01:27:07.514922636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:27:07.515086 containerd[1726]: time="2026-01-28T01:27:07.514991396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:27:07.515583 kubelet[3198]: E0128 01:27:07.515387 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:07.515583 kubelet[3198]: E0128 01:27:07.515435 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:07.515844 kubelet[3198]: E0128 01:27:07.515652 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc8mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f6944bcdb-mk9w8_calico-system(efad0924-58e2-470d-a190-d57cd8685e98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:07.516558 containerd[1726]: time="2026-01-28T01:27:07.516253435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:27:07.516858 kubelet[3198]: E0128 01:27:07.516807 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" 
podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:27:07.783828 containerd[1726]: time="2026-01-28T01:27:07.783715303Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:07.787135 containerd[1726]: time="2026-01-28T01:27:07.787064261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:27:07.787135 containerd[1726]: time="2026-01-28T01:27:07.787108861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:27:07.787466 kubelet[3198]: E0128 01:27:07.787384 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:27:07.787466 kubelet[3198]: E0128 01:27:07.787441 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:27:07.787942 kubelet[3198]: E0128 01:27:07.787658 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqzmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:07.789749 containerd[1726]: time="2026-01-28T01:27:07.789716099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:27:08.185220 containerd[1726]: time="2026-01-28T01:27:08.185174906Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:08.187972 containerd[1726]: time="2026-01-28T01:27:08.187939024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:27:08.187972 containerd[1726]: time="2026-01-28T01:27:08.188009904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:27:08.188154 kubelet[3198]: E0128 01:27:08.188116 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:27:08.188207 kubelet[3198]: E0128 01:27:08.188162 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:27:08.188684 kubelet[3198]: E0128 01:27:08.188447 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqzmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:08.188900 containerd[1726]: time="2026-01-28T01:27:08.188759303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:08.189827 kubelet[3198]: E0128 01:27:08.189764 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:27:08.492801 containerd[1726]: time="2026-01-28T01:27:08.492640023Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:08.495104 containerd[1726]: time="2026-01-28T01:27:08.495005581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:08.495104 containerd[1726]: time="2026-01-28T01:27:08.495078221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:08.495319 kubelet[3198]: E0128 01:27:08.495198 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:08.495319 kubelet[3198]: E0128 01:27:08.495241 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:08.495633 kubelet[3198]: E0128 01:27:08.495357 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhj9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84868d5f79-sv4mj_calico-apiserver(19367d42-7907-4f04-8c63-bcae87fa9f82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:08.496710 kubelet[3198]: E0128 01:27:08.496620 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:27:08.946522 containerd[1726]: time="2026-01-28T01:27:08.946149545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:27:09.235562 containerd[1726]: time="2026-01-28T01:27:09.234140118Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:09.237034 containerd[1726]: time="2026-01-28T01:27:09.236979875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:27:09.237134 containerd[1726]: time="2026-01-28T01:27:09.237093835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:09.237323 kubelet[3198]: E0128 01:27:09.237284 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:09.237389 kubelet[3198]: E0128 01:27:09.237336 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:09.237822 kubelet[3198]: E0128 01:27:09.237498 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bd44v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mm8vq_calico-system(50b57260-757d-49ab-b412-157457a311f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:09.239106 kubelet[3198]: E0128 01:27:09.239074 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:27:17.151367 containerd[1726]: 
time="2026-01-28T01:27:17.150481213Z" level=info msg="StopPodSandbox for \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\"" Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.203 [WARNING][5766] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0", GenerateName:"calico-kube-controllers-f6944bcdb-", Namespace:"calico-system", SelfLink:"", UID:"efad0924-58e2-470d-a190-d57cd8685e98", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6944bcdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d", Pod:"calico-kube-controllers-f6944bcdb-mk9w8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b30a7046ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.204 [INFO][5766] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.204 [INFO][5766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" iface="eth0" netns="" Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.204 [INFO][5766] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.204 [INFO][5766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.230 [INFO][5773] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.231 [INFO][5773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.232 [INFO][5773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.241 [WARNING][5773] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.241 [INFO][5773] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.242 [INFO][5773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:17.250260 containerd[1726]: 2026-01-28 01:27:17.246 [INFO][5766] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:27:17.250260 containerd[1726]: time="2026-01-28T01:27:17.249822694Z" level=info msg="TearDown network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\" successfully" Jan 28 01:27:17.250260 containerd[1726]: time="2026-01-28T01:27:17.249854254Z" level=info msg="StopPodSandbox for \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\" returns successfully" Jan 28 01:27:17.251047 containerd[1726]: time="2026-01-28T01:27:17.250647454Z" level=info msg="RemovePodSandbox for \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\"" Jan 28 01:27:17.251987 containerd[1726]: time="2026-01-28T01:27:17.251276173Z" level=info msg="Forcibly stopping sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\"" Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.312 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0", GenerateName:"calico-kube-controllers-f6944bcdb-", Namespace:"calico-system", SelfLink:"", UID:"efad0924-58e2-470d-a190-d57cd8685e98", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6944bcdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"9f5aee555a7fe7ad119b5c01c7080b8bf838891c169f990c463b3e917293d35d", Pod:"calico-kube-controllers-f6944bcdb-mk9w8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b30a7046ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.313 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.313 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" iface="eth0" netns="" Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.313 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.313 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.339 [INFO][5795] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.339 [INFO][5795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.339 [INFO][5795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.348 [WARNING][5795] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.348 [INFO][5795] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" HandleID="k8s-pod-network.3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--kube--controllers--f6944bcdb--mk9w8-eth0" Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.349 [INFO][5795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:17.354657 containerd[1726]: 2026-01-28 01:27:17.352 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185" Jan 28 01:27:17.357536 containerd[1726]: time="2026-01-28T01:27:17.355123091Z" level=info msg="TearDown network for sandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\" successfully" Jan 28 01:27:17.364645 containerd[1726]: time="2026-01-28T01:27:17.364499363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:17.364645 containerd[1726]: time="2026-01-28T01:27:17.364561843Z" level=info msg="RemovePodSandbox \"3068b391f973622a86f1b8c4ba950ba831f0132e373e82ffebe218721244d185\" returns successfully" Jan 28 01:27:17.365178 containerd[1726]: time="2026-01-28T01:27:17.365153603Z" level=info msg="StopPodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\"" Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.415 [WARNING][5809] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"36f60f1b-edc5-4d4b-8496-6ed810707a8c", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976", Pod:"coredns-668d6bf9bc-5ldbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c9379b03b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.415 [INFO][5809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.415 [INFO][5809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" iface="eth0" netns="" Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.416 [INFO][5809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.416 [INFO][5809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.438 [INFO][5817] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.439 [INFO][5817] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.439 [INFO][5817] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.447 [WARNING][5817] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.447 [INFO][5817] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.448 [INFO][5817] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:17.453454 containerd[1726]: 2026-01-28 01:27:17.451 [INFO][5809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:27:17.454663 containerd[1726]: time="2026-01-28T01:27:17.453441013Z" level=info msg="TearDown network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\" successfully" Jan 28 01:27:17.454663 containerd[1726]: time="2026-01-28T01:27:17.454543532Z" level=info msg="StopPodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\" returns successfully" Jan 28 01:27:17.455469 containerd[1726]: time="2026-01-28T01:27:17.455079292Z" level=info msg="RemovePodSandbox for \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\"" Jan 28 01:27:17.455469 containerd[1726]: time="2026-01-28T01:27:17.455107932Z" level=info msg="Forcibly stopping sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\"" Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.491 [WARNING][5832] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"36f60f1b-edc5-4d4b-8496-6ed810707a8c", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"3861d34654a42ed07c3e49c899225628f626508f472d089f220129ef7f6a9976", Pod:"coredns-668d6bf9bc-5ldbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c9379b03b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.492 [INFO][5832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.492 [INFO][5832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" iface="eth0" netns="" Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.492 [INFO][5832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.492 [INFO][5832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.513 [INFO][5840] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.513 [INFO][5840] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.513 [INFO][5840] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.523 [WARNING][5840] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.523 [INFO][5840] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" HandleID="k8s-pod-network.c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--5ldbp-eth0" Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.524 [INFO][5840] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:17.529153 containerd[1726]: 2026-01-28 01:27:17.525 [INFO][5832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce" Jan 28 01:27:17.530694 containerd[1726]: time="2026-01-28T01:27:17.529544753Z" level=info msg="TearDown network for sandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\" successfully" Jan 28 01:27:17.536919 containerd[1726]: time="2026-01-28T01:27:17.536882507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:17.537058 containerd[1726]: time="2026-01-28T01:27:17.537042827Z" level=info msg="RemovePodSandbox \"c6c9888f8df2f131300d9cb7e294ea792c4771319c736b86541d43b0730adfce\" returns successfully" Jan 28 01:27:17.537532 containerd[1726]: time="2026-01-28T01:27:17.537511506Z" level=info msg="StopPodSandbox for \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\"" Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.582 [WARNING][5854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0ef4dca-fc9b-48e6-a83b-e247508a0b04", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5", Pod:"csi-node-driver-kwqqh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4361da1fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.582 [INFO][5854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.582 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" iface="eth0" netns="" Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.582 [INFO][5854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.582 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.617 [INFO][5861] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.617 [INFO][5861] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.617 [INFO][5861] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.625 [WARNING][5861] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.625 [INFO][5861] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.626 [INFO][5861] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:17.632935 containerd[1726]: 2026-01-28 01:27:17.629 [INFO][5854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:27:17.633490 containerd[1726]: time="2026-01-28T01:27:17.632986151Z" level=info msg="TearDown network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\" successfully" Jan 28 01:27:17.633490 containerd[1726]: time="2026-01-28T01:27:17.633010351Z" level=info msg="StopPodSandbox for \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\" returns successfully" Jan 28 01:27:17.635711 containerd[1726]: time="2026-01-28T01:27:17.635682349Z" level=info msg="RemovePodSandbox for \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\"" Jan 28 01:27:17.635781 containerd[1726]: time="2026-01-28T01:27:17.635716349Z" level=info msg="Forcibly stopping sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\"" Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.678 [WARNING][5875] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0ef4dca-fc9b-48e6-a83b-e247508a0b04", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"fed6dfad5e9c1e984bc80aa576afd7a525cec7219a7f42d55931a9b1a27474b5", Pod:"csi-node-driver-kwqqh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4361da1fae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.679 [INFO][5875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.679 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" iface="eth0" netns="" Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.679 [INFO][5875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.679 [INFO][5875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.714 [INFO][5882] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.714 [INFO][5882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.715 [INFO][5882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.730 [WARNING][5882] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.730 [INFO][5882] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" HandleID="k8s-pod-network.433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Workload="ci--4081.3.6--n--20d4350ff0-k8s-csi--node--driver--kwqqh-eth0" Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.744 [INFO][5882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:17.751082 containerd[1726]: 2026-01-28 01:27:17.748 [INFO][5875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019" Jan 28 01:27:17.751082 containerd[1726]: time="2026-01-28T01:27:17.751054297Z" level=info msg="TearDown network for sandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\" successfully" Jan 28 01:27:17.759836 containerd[1726]: time="2026-01-28T01:27:17.759773570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:17.759964 containerd[1726]: time="2026-01-28T01:27:17.759877650Z" level=info msg="RemovePodSandbox \"433ce0bddd2837495699c28dbe851ac680a8529b73950735351eb1e471659019\" returns successfully" Jan 28 01:27:17.761918 containerd[1726]: time="2026-01-28T01:27:17.761879849Z" level=info msg="StopPodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\"" Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.818 [WARNING][5896] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50b57260-757d-49ab-b412-157457a311f9", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866", Pod:"goldmane-666569f655-mm8vq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic5416b12b48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.818 [INFO][5896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.818 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" iface="eth0" netns="" Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.818 [INFO][5896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.818 [INFO][5896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.853 [INFO][5903] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.853 [INFO][5903] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.853 [INFO][5903] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.867 [WARNING][5903] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.867 [INFO][5903] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.869 [INFO][5903] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:17.874656 containerd[1726]: 2026-01-28 01:27:17.871 [INFO][5896] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:27:17.875050 containerd[1726]: time="2026-01-28T01:27:17.874703039Z" level=info msg="TearDown network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\" successfully" Jan 28 01:27:17.875050 containerd[1726]: time="2026-01-28T01:27:17.874725759Z" level=info msg="StopPodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\" returns successfully" Jan 28 01:27:17.875476 containerd[1726]: time="2026-01-28T01:27:17.875266999Z" level=info msg="RemovePodSandbox for \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\"" Jan 28 01:27:17.875476 containerd[1726]: time="2026-01-28T01:27:17.875297839Z" level=info msg="Forcibly stopping sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\"" Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.918 [WARNING][5917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50b57260-757d-49ab-b412-157457a311f9", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"244f1083b768aca8853778350776f9d57652d81694974dbfecf478d6d87ae866", Pod:"goldmane-666569f655-mm8vq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic5416b12b48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.919 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.919 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" iface="eth0" netns="" Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.919 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.919 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.941 [INFO][5925] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.941 [INFO][5925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.941 [INFO][5925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.951 [WARNING][5925] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.951 [INFO][5925] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" HandleID="k8s-pod-network.d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Workload="ci--4081.3.6--n--20d4350ff0-k8s-goldmane--666569f655--mm8vq-eth0" Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.953 [INFO][5925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:17.957380 containerd[1726]: 2026-01-28 01:27:17.955 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873" Jan 28 01:27:17.957791 containerd[1726]: time="2026-01-28T01:27:17.957423454Z" level=info msg="TearDown network for sandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\" successfully" Jan 28 01:27:17.965539 containerd[1726]: time="2026-01-28T01:27:17.965152408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:17.965670 containerd[1726]: time="2026-01-28T01:27:17.965555887Z" level=info msg="RemovePodSandbox \"d45842f6372921168b3c8bdf366aef4d7dee14c46ae2cae07e3e1560378b2873\" returns successfully" Jan 28 01:27:17.966048 containerd[1726]: time="2026-01-28T01:27:17.966010007Z" level=info msg="StopPodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\"" Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.012 [WARNING][5941] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0", GenerateName:"calico-apiserver-84868d5f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"79191197-2837-43fa-b284-2023c360b9e2", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84868d5f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21", Pod:"calico-apiserver-84868d5f79-45qm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib29207a793b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.013 [INFO][5941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.013 [INFO][5941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" iface="eth0" netns="" Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.013 [INFO][5941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.013 [INFO][5941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.034 [INFO][5948] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.034 [INFO][5948] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.034 [INFO][5948] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.042 [WARNING][5948] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.042 [INFO][5948] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.043 [INFO][5948] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:18.050589 containerd[1726]: 2026-01-28 01:27:18.048 [INFO][5941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:27:18.050589 containerd[1726]: time="2026-01-28T01:27:18.050548140Z" level=info msg="TearDown network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\" successfully" Jan 28 01:27:18.051538 containerd[1726]: time="2026-01-28T01:27:18.050572740Z" level=info msg="StopPodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\" returns successfully" Jan 28 01:27:18.053640 containerd[1726]: time="2026-01-28T01:27:18.053279498Z" level=info msg="RemovePodSandbox for \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\"" Jan 28 01:27:18.053710 containerd[1726]: time="2026-01-28T01:27:18.053657618Z" level=info msg="Forcibly stopping sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\"" Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.103 [WARNING][5962] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0", GenerateName:"calico-apiserver-84868d5f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"79191197-2837-43fa-b284-2023c360b9e2", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84868d5f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"401d8b492b8bfd031df64d94978d0b39d56905f21f168c6cc6457c4e379ecf21", Pod:"calico-apiserver-84868d5f79-45qm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib29207a793b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.104 [INFO][5962] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.104 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" iface="eth0" netns="" Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.104 [INFO][5962] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.104 [INFO][5962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.138 [INFO][5969] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.139 [INFO][5969] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.139 [INFO][5969] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.149 [WARNING][5969] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.150 [INFO][5969] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" HandleID="k8s-pod-network.708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--45qm5-eth0" Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.152 [INFO][5969] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:18.156554 containerd[1726]: 2026-01-28 01:27:18.154 [INFO][5962] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716" Jan 28 01:27:18.157513 containerd[1726]: time="2026-01-28T01:27:18.156622096Z" level=info msg="TearDown network for sandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\" successfully" Jan 28 01:27:18.165482 containerd[1726]: time="2026-01-28T01:27:18.165422689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:18.165603 containerd[1726]: time="2026-01-28T01:27:18.165552009Z" level=info msg="RemovePodSandbox \"708c5e1d92d9cfe8e630f2390e04a1c9070bcd1230738c735c57a334e0275716\" returns successfully" Jan 28 01:27:18.165979 containerd[1726]: time="2026-01-28T01:27:18.165957569Z" level=info msg="StopPodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\"" Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.202 [WARNING][5983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0", GenerateName:"calico-apiserver-84868d5f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"19367d42-7907-4f04-8c63-bcae87fa9f82", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84868d5f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc", Pod:"calico-apiserver-84868d5f79-sv4mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali536053265ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.203 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.203 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" iface="eth0" netns="" Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.203 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.203 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.220 [INFO][5991] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.220 [INFO][5991] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.220 [INFO][5991] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.228 [WARNING][5991] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.229 [INFO][5991] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.230 [INFO][5991] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:18.233501 containerd[1726]: 2026-01-28 01:27:18.231 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:27:18.234002 containerd[1726]: time="2026-01-28T01:27:18.233451955Z" level=info msg="TearDown network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\" successfully" Jan 28 01:27:18.234002 containerd[1726]: time="2026-01-28T01:27:18.233913435Z" level=info msg="StopPodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\" returns successfully" Jan 28 01:27:18.234536 containerd[1726]: time="2026-01-28T01:27:18.234513954Z" level=info msg="RemovePodSandbox for \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\"" Jan 28 01:27:18.234577 containerd[1726]: time="2026-01-28T01:27:18.234547474Z" level=info msg="Forcibly stopping sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\"" Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.266 [WARNING][6005] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0", GenerateName:"calico-apiserver-84868d5f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"19367d42-7907-4f04-8c63-bcae87fa9f82", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84868d5f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"bde137431fe38ba8636ccb489166ca9bb5f864b3ac15a8a9e6a93ba1e9ab4cfc", Pod:"calico-apiserver-84868d5f79-sv4mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali536053265ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.266 [INFO][6005] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.266 [INFO][6005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" iface="eth0" netns="" Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.266 [INFO][6005] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.266 [INFO][6005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.283 [INFO][6013] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.283 [INFO][6013] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.284 [INFO][6013] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.291 [WARNING][6013] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.292 [INFO][6013] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" HandleID="k8s-pod-network.413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Workload="ci--4081.3.6--n--20d4350ff0-k8s-calico--apiserver--84868d5f79--sv4mj-eth0" Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.295 [INFO][6013] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:18.298714 containerd[1726]: 2026-01-28 01:27:18.297 [INFO][6005] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3" Jan 28 01:27:18.299326 containerd[1726]: time="2026-01-28T01:27:18.298757263Z" level=info msg="TearDown network for sandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\" successfully" Jan 28 01:27:18.308434 containerd[1726]: time="2026-01-28T01:27:18.308381136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:18.308547 containerd[1726]: time="2026-01-28T01:27:18.308491976Z" level=info msg="RemovePodSandbox \"413c6b138559004a1989dbfdcd02bc3f522093bc71da7c9c38817577d2daa2f3\" returns successfully" Jan 28 01:27:18.308943 containerd[1726]: time="2026-01-28T01:27:18.308920895Z" level=info msg="StopPodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\"" Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.341 [WARNING][6027] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9", Pod:"coredns-668d6bf9bc-fv4g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59ed2d654ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.341 [INFO][6027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.341 [INFO][6027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" iface="eth0" netns="" Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.341 [INFO][6027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.341 [INFO][6027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.363 [INFO][6034] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.363 [INFO][6034] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.363 [INFO][6034] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.371 [WARNING][6034] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.371 [INFO][6034] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.373 [INFO][6034] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:18.376456 containerd[1726]: 2026-01-28 01:27:18.374 [INFO][6027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:27:18.377299 containerd[1726]: time="2026-01-28T01:27:18.376522362Z" level=info msg="TearDown network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\" successfully" Jan 28 01:27:18.377299 containerd[1726]: time="2026-01-28T01:27:18.376546282Z" level=info msg="StopPodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\" returns successfully" Jan 28 01:27:18.377299 containerd[1726]: time="2026-01-28T01:27:18.377291161Z" level=info msg="RemovePodSandbox for \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\"" Jan 28 01:27:18.377421 containerd[1726]: time="2026-01-28T01:27:18.377316081Z" level=info msg="Forcibly stopping sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\"" Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.412 [WARNING][6048] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e7d11ad6-ecf0-4303-8f1f-51aaa54b1ca6", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-20d4350ff0", ContainerID:"18d17fd50a996d1b9477a7116faf06b678cfff47c6fb3cbe03fc4da569056bf9", Pod:"coredns-668d6bf9bc-fv4g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59ed2d654ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.413 [INFO][6048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.413 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" iface="eth0" netns="" Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.413 [INFO][6048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.413 [INFO][6048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.436 [INFO][6055] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.436 [INFO][6055] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.436 [INFO][6055] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.444 [WARNING][6055] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.444 [INFO][6055] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" HandleID="k8s-pod-network.89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Workload="ci--4081.3.6--n--20d4350ff0-k8s-coredns--668d6bf9bc--fv4g9-eth0" Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.445 [INFO][6055] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:18.449309 containerd[1726]: 2026-01-28 01:27:18.447 [INFO][6048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee" Jan 28 01:27:18.449928 containerd[1726]: time="2026-01-28T01:27:18.449355784Z" level=info msg="TearDown network for sandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\" successfully" Jan 28 01:27:18.456795 containerd[1726]: time="2026-01-28T01:27:18.456752178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:18.456902 containerd[1726]: time="2026-01-28T01:27:18.456816018Z" level=info msg="RemovePodSandbox \"89be069de20bf5aa145668d94d56251764fe263aae93e9351855f8c94b938bee\" returns successfully" Jan 28 01:27:18.947258 kubelet[3198]: E0128 01:27:18.947167 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:27:18.950202 kubelet[3198]: E0128 01:27:18.950131 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:27:19.946512 kubelet[3198]: E0128 01:27:19.944261 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:27:19.946512 kubelet[3198]: E0128 01:27:19.944339 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:27:20.948350 kubelet[3198]: E0128 01:27:20.947332 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:27:22.944444 kubelet[3198]: E0128 01:27:22.944155 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:27:29.945632 kubelet[3198]: E0128 01:27:29.945559 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:27:31.214800 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:46276.service - OpenSSH per-connection server daemon (10.200.16.10:46276). Jan 28 01:27:31.664036 sshd[6069]: Accepted publickey for core from 10.200.16.10 port 46276 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:31.666233 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:31.673260 systemd-logind[1706]: New session 10 of user core. Jan 28 01:27:31.677669 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:27:31.943752 kubelet[3198]: E0128 01:27:31.943640 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:27:32.106966 sshd[6069]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:32.111124 systemd-logind[1706]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:27:32.112741 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:46276.service: Deactivated successfully. Jan 28 01:27:32.116588 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:27:32.120618 systemd-logind[1706]: Removed session 10. 
Jan 28 01:27:32.945710 kubelet[3198]: E0128 01:27:32.945662 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:27:33.944569 kubelet[3198]: E0128 01:27:33.944514 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:27:34.947003 kubelet[3198]: E0128 01:27:34.946966 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:27:34.948289 kubelet[3198]: E0128 01:27:34.947577 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:27:37.198509 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:46280.service - OpenSSH per-connection server daemon (10.200.16.10:46280). Jan 28 01:27:37.687400 sshd[6089]: Accepted publickey for core from 10.200.16.10 port 46280 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:37.688277 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:37.697793 systemd-logind[1706]: New session 11 of user core. 
Jan 28 01:27:37.703651 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:27:38.112421 sshd[6089]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:38.116945 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:46280.service: Deactivated successfully. Jan 28 01:27:38.119821 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:27:38.123327 systemd-logind[1706]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:27:38.125075 systemd-logind[1706]: Removed session 11. Jan 28 01:27:41.573780 systemd[1]: run-containerd-runc-k8s.io-1c4be91cd68e119824a4328569c81b0154e835d8312d0c08bb48b99a8b19ffd1-runc.gYaoji.mount: Deactivated successfully. Jan 28 01:27:42.945957 containerd[1726]: time="2026-01-28T01:27:42.945861101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:27:43.202744 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:50202.service - OpenSSH per-connection server daemon (10.200.16.10:50202). Jan 28 01:27:43.220769 containerd[1726]: time="2026-01-28T01:27:43.220610374Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:43.223270 containerd[1726]: time="2026-01-28T01:27:43.223164011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:27:43.223484 containerd[1726]: time="2026-01-28T01:27:43.223378611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:27:43.223777 kubelet[3198]: E0128 01:27:43.223537 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:43.223777 kubelet[3198]: E0128 01:27:43.223583 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:43.223777 kubelet[3198]: E0128 01:27:43.223689 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41cd963ee3c14a94bb038663169e4951,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-766b799ccb-m5599_calico-system(7986b68d-2b69-4fd3-a1ac-2bbd1d928663): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:43.226762 containerd[1726]: time="2026-01-28T01:27:43.226678128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:27:43.491538 containerd[1726]: time="2026-01-28T01:27:43.491009971Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:43.495502 containerd[1726]: time="2026-01-28T01:27:43.495390367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:27:43.495502 containerd[1726]: time="2026-01-28T01:27:43.495465767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:27:43.495648 kubelet[3198]: E0128 01:27:43.495616 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:43.495686 kubelet[3198]: E0128 01:27:43.495664 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:43.495804 kubelet[3198]: E0128 01:27:43.495769 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l2wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-766b799ccb-m5599_calico-system(7986b68d-2b69-4fd3-a1ac-2bbd1d928663): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:43.497061 kubelet[3198]: E0128 01:27:43.497015 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:27:43.650920 sshd[6131]: Accepted publickey for core from 10.200.16.10 port 50202 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:43.652232 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 28 01:27:43.655871 systemd-logind[1706]: New session 12 of user core. Jan 28 01:27:43.665608 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:27:43.943903 kubelet[3198]: E0128 01:27:43.943860 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:27:44.059951 sshd[6131]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:44.064289 systemd-logind[1706]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:27:44.065795 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:50202.service: Deactivated successfully. Jan 28 01:27:44.072869 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:27:44.074124 systemd-logind[1706]: Removed session 12. Jan 28 01:27:44.167585 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:50208.service - OpenSSH per-connection server daemon (10.200.16.10:50208). Jan 28 01:27:44.662374 sshd[6145]: Accepted publickey for core from 10.200.16.10 port 50208 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:44.664202 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:44.670249 systemd-logind[1706]: New session 13 of user core. Jan 28 01:27:44.674628 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:27:45.155672 sshd[6145]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:45.160107 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:50208.service: Deactivated successfully. Jan 28 01:27:45.161521 systemd-logind[1706]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:27:45.164899 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:27:45.167284 systemd-logind[1706]: Removed session 13. Jan 28 01:27:45.236776 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:50218.service - OpenSSH per-connection server daemon (10.200.16.10:50218). Jan 28 01:27:45.686950 sshd[6170]: Accepted publickey for core from 10.200.16.10 port 50218 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:45.688281 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:45.694025 systemd-logind[1706]: New session 14 of user core. Jan 28 01:27:45.700623 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 28 01:27:45.943379 kubelet[3198]: E0128 01:27:45.943233 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:27:46.146592 sshd[6170]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:46.149050 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:50218.service: Deactivated successfully. Jan 28 01:27:46.151608 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:27:46.153455 systemd-logind[1706]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:27:46.154533 systemd-logind[1706]: Removed session 14. Jan 28 01:27:46.945278 kubelet[3198]: E0128 01:27:46.945236 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:27:47.945394 containerd[1726]: time="2026-01-28T01:27:47.945025861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:27:48.229483 containerd[1726]: time="2026-01-28T01:27:48.228867378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:48.232526 containerd[1726]: time="2026-01-28T01:27:48.232436935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:27:48.232526 containerd[1726]: time="2026-01-28T01:27:48.232498135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:27:48.232661 kubelet[3198]: E0128 01:27:48.232621 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 
01:27:48.232915 kubelet[3198]: E0128 01:27:48.232672 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:48.232915 kubelet[3198]: E0128 01:27:48.232783 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc8mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f6944bcdb-mk9w8_calico-system(efad0924-58e2-470d-a190-d57cd8685e98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:48.234193 kubelet[3198]: E0128 01:27:48.234162 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:27:49.943751 containerd[1726]: time="2026-01-28T01:27:49.943531828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:27:50.233902 containerd[1726]: time="2026-01-28T01:27:50.233788299Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:50.236421 containerd[1726]: time="2026-01-28T01:27:50.236382536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:27:50.236580 containerd[1726]: time="2026-01-28T01:27:50.236494016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:50.236647 kubelet[3198]: E0128 01:27:50.236609 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:50.236941 kubelet[3198]: E0128 01:27:50.236653 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:50.236941 kubelet[3198]: E0128 01:27:50.236775 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bd44v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mm8vq_calico-system(50b57260-757d-49ab-b412-157457a311f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:50.238042 kubelet[3198]: E0128 01:27:50.238010 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:27:51.228491 systemd[1]: Started 
sshd@12-10.200.20.11:22-10.200.16.10:51506.service - OpenSSH per-connection server daemon (10.200.16.10:51506). Jan 28 01:27:51.682704 sshd[6190]: Accepted publickey for core from 10.200.16.10 port 51506 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:51.683770 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:51.687398 systemd-logind[1706]: New session 15 of user core. Jan 28 01:27:51.692585 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:27:52.084107 sshd[6190]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:52.086817 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:51506.service: Deactivated successfully. Jan 28 01:27:52.089989 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:27:52.091319 systemd-logind[1706]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:27:52.093594 systemd-logind[1706]: Removed session 15. Jan 28 01:27:55.944604 containerd[1726]: time="2026-01-28T01:27:55.944345566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:56.212799 containerd[1726]: time="2026-01-28T01:27:56.212567437Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:56.216692 containerd[1726]: time="2026-01-28T01:27:56.216553794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:56.216692 containerd[1726]: time="2026-01-28T01:27:56.216659154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:56.216867 kubelet[3198]: E0128 01:27:56.216807 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:56.217524 kubelet[3198]: E0128 01:27:56.216868 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:56.217524 kubelet[3198]: E0128 01:27:56.217002 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhj9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84868d5f79-sv4mj_calico-apiserver(19367d42-7907-4f04-8c63-bcae87fa9f82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:56.218712 kubelet[3198]: E0128 01:27:56.218669 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:27:56.945911 kubelet[3198]: E0128 01:27:56.945669 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:27:57.172543 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:51510.service - OpenSSH per-connection server daemon (10.200.16.10:51510). Jan 28 01:27:57.620589 sshd[6204]: Accepted publickey for core from 10.200.16.10 port 51510 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:57.621728 sshd[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:57.626921 systemd-logind[1706]: New session 16 of user core. Jan 28 01:27:57.630834 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:27:57.944630 containerd[1726]: time="2026-01-28T01:27:57.944315510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:58.014694 sshd[6204]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:58.018225 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:51510.service: Deactivated successfully. Jan 28 01:27:58.020624 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:27:58.021735 systemd-logind[1706]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:27:58.022765 systemd-logind[1706]: Removed session 16. Jan 28 01:27:58.216514 containerd[1726]: time="2026-01-28T01:27:58.216122577Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:58.219669 containerd[1726]: time="2026-01-28T01:27:58.219632494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:58.219749 containerd[1726]: time="2026-01-28T01:27:58.219730414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:58.219907 kubelet[3198]: E0128 01:27:58.219871 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:58.220185 kubelet[3198]: E0128 01:27:58.219918 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:58.220185 kubelet[3198]: E0128 01:27:58.220029 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmhzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84868d5f79-45qm5_calico-apiserver(79191197-2837-43fa-b284-2023c360b9e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:58.221526 kubelet[3198]: E0128 01:27:58.221441 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:28:01.944355 containerd[1726]: time="2026-01-28T01:28:01.944309595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:28:02.237517 containerd[1726]: time="2026-01-28T01:28:02.236484593Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:02.242170 containerd[1726]: time="2026-01-28T01:28:02.242082189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:28:02.242409 containerd[1726]: time="2026-01-28T01:28:02.242317349Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:28:02.242636 kubelet[3198]: E0128 01:28:02.242591 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:28:02.242973 kubelet[3198]: E0128 01:28:02.242649 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:28:02.242973 kubelet[3198]: E0128 01:28:02.242761 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqzmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:02.244882 containerd[1726]: time="2026-01-28T01:28:02.244771907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:28:02.533711 containerd[1726]: time="2026-01-28T01:28:02.533596268Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:02.537515 containerd[1726]: time="2026-01-28T01:28:02.537443585Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:28:02.537664 containerd[1726]: time="2026-01-28T01:28:02.537493625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:28:02.537935 kubelet[3198]: E0128 01:28:02.537754 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:28:02.538024 kubelet[3198]: E0128 01:28:02.537944 3198 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:28:02.538102 kubelet[3198]: E0128 01:28:02.538064 3198 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqzmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-kwqqh_calico-system(b0ef4dca-fc9b-48e6-a83b-e247508a0b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:02.539471 kubelet[3198]: E0128 01:28:02.539422 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:28:02.945524 kubelet[3198]: E0128 01:28:02.944903 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:28:03.102180 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:54890.service - OpenSSH per-connection server daemon (10.200.16.10:54890). Jan 28 01:28:03.593123 sshd[6217]: Accepted publickey for core from 10.200.16.10 port 54890 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:03.594931 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:03.602284 systemd-logind[1706]: New session 17 of user core. Jan 28 01:28:03.606859 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:28:04.042780 sshd[6217]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:04.047416 systemd-logind[1706]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:28:04.049101 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:54890.service: Deactivated successfully. Jan 28 01:28:04.053038 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:28:04.058661 systemd-logind[1706]: Removed session 17. Jan 28 01:28:04.130629 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:54892.service - OpenSSH per-connection server daemon (10.200.16.10:54892). Jan 28 01:28:04.587626 sshd[6229]: Accepted publickey for core from 10.200.16.10 port 54892 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:04.588989 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:04.592602 systemd-logind[1706]: New session 18 of user core. Jan 28 01:28:04.598610 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 01:28:05.114625 sshd[6229]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:05.119020 systemd-logind[1706]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:28:05.119698 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:54892.service: Deactivated successfully. Jan 28 01:28:05.122106 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:28:05.123007 systemd-logind[1706]: Removed session 18. Jan 28 01:28:05.197872 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:54896.service - OpenSSH per-connection server daemon (10.200.16.10:54896). Jan 28 01:28:05.651030 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 54896 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:05.652502 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:05.657314 systemd-logind[1706]: New session 19 of user core. Jan 28 01:28:05.662707 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:28:05.945926 kubelet[3198]: E0128 01:28:05.944291 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:28:06.728362 sshd[6240]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:06.734057 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:54896.service: Deactivated successfully. Jan 28 01:28:06.736439 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:28:06.738188 systemd-logind[1706]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:28:06.739344 systemd-logind[1706]: Removed session 19. Jan 28 01:28:06.825791 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:54904.service - OpenSSH per-connection server daemon (10.200.16.10:54904). Jan 28 01:28:07.318529 sshd[6263]: Accepted publickey for core from 10.200.16.10 port 54904 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:07.319881 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:07.325741 systemd-logind[1706]: New session 20 of user core. Jan 28 01:28:07.328646 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:28:07.854367 sshd[6263]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:07.857987 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:54904.service: Deactivated successfully. Jan 28 01:28:07.860127 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:28:07.860996 systemd-logind[1706]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:28:07.861815 systemd-logind[1706]: Removed session 20. 
Jan 28 01:28:07.945402 kubelet[3198]: E0128 01:28:07.945307 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:28:07.948877 kubelet[3198]: E0128 01:28:07.945923 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:28:07.948828 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:54908.service - OpenSSH per-connection server daemon (10.200.16.10:54908). Jan 28 01:28:08.450281 sshd[6274]: Accepted publickey for core from 10.200.16.10 port 54908 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:08.450937 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:08.455064 systemd-logind[1706]: New session 21 of user core. Jan 28 01:28:08.460906 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:28:08.886606 sshd[6274]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:08.890650 systemd-logind[1706]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:28:08.891894 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:54908.service: Deactivated successfully. Jan 28 01:28:08.894963 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:28:08.900324 systemd-logind[1706]: Removed session 21. Jan 28 01:28:11.575496 systemd[1]: run-containerd-runc-k8s.io-1c4be91cd68e119824a4328569c81b0154e835d8312d0c08bb48b99a8b19ffd1-runc.jmTT7f.mount: Deactivated successfully. 
Jan 28 01:28:11.943636 kubelet[3198]: E0128 01:28:11.943400 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:28:13.943405 kubelet[3198]: E0128 01:28:13.943346 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:28:13.944829 kubelet[3198]: E0128 01:28:13.944562 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:28:13.977060 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:34030.service - OpenSSH per-connection server daemon (10.200.16.10:34030). Jan 28 01:28:14.433482 sshd[6310]: Accepted publickey for core from 10.200.16.10 port 34030 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:14.435570 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:14.443220 systemd-logind[1706]: New session 22 of user core. Jan 28 01:28:14.447870 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:28:14.839484 sshd[6310]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:14.844729 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:34030.service: Deactivated successfully. Jan 28 01:28:14.847433 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:28:14.849392 systemd-logind[1706]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:28:14.850455 systemd-logind[1706]: Removed session 22. 
Jan 28 01:28:17.943747 kubelet[3198]: E0128 01:28:17.943679 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:28:19.929409 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:33172.service - OpenSSH per-connection server daemon (10.200.16.10:33172). Jan 28 01:28:19.944095 kubelet[3198]: E0128 01:28:19.944037 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:28:19.945652 kubelet[3198]: E0128 01:28:19.945593 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:28:20.377479 sshd[6325]: Accepted publickey for core from 10.200.16.10 port 33172 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:20.378380 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:20.383569 systemd-logind[1706]: New session 23 of user core. Jan 28 01:28:20.389788 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:28:20.769455 sshd[6325]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:20.772400 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:33172.service: Deactivated successfully. Jan 28 01:28:20.775595 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:28:20.778031 systemd-logind[1706]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:28:20.779574 systemd-logind[1706]: Removed session 23. 
Jan 28 01:28:23.944495 kubelet[3198]: E0128 01:28:23.943099 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:28:25.865753 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:33186.service - OpenSSH per-connection server daemon (10.200.16.10:33186). Jan 28 01:28:25.945804 kubelet[3198]: E0128 01:28:25.943880 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:28:26.353635 sshd[6340]: Accepted publickey for core from 10.200.16.10 port 33186 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:26.355970 sshd[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:26.360936 systemd-logind[1706]: New session 24 of user core. Jan 28 01:28:26.367613 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:28:26.768440 sshd[6340]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:26.771840 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:33186.service: Deactivated successfully. Jan 28 01:28:26.774433 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:28:26.775956 systemd-logind[1706]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:28:26.777342 systemd-logind[1706]: Removed session 24. 
Jan 28 01:28:28.946514 kubelet[3198]: E0128 01:28:28.946425 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04" Jan 28 01:28:29.944006 kubelet[3198]: E0128 01:28:29.943944 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mm8vq" podUID="50b57260-757d-49ab-b412-157457a311f9" Jan 28 01:28:31.842702 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:49498.service - OpenSSH per-connection server daemon (10.200.16.10:49498). Jan 28 01:28:32.252868 sshd[6354]: Accepted publickey for core from 10.200.16.10 port 49498 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:32.254245 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:32.261531 systemd-logind[1706]: New session 25 of user core. Jan 28 01:28:32.265605 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:28:32.630489 sshd[6354]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:32.636809 systemd-logind[1706]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:28:32.637140 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:49498.service: Deactivated successfully. Jan 28 01:28:32.639941 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:28:32.643932 systemd-logind[1706]: Removed session 25. 
Jan 28 01:28:32.946722 kubelet[3198]: E0128 01:28:32.945306 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-766b799ccb-m5599" podUID="7986b68d-2b69-4fd3-a1ac-2bbd1d928663" Jan 28 01:28:33.943877 kubelet[3198]: E0128 01:28:33.943770 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-sv4mj" podUID="19367d42-7907-4f04-8c63-bcae87fa9f82" Jan 28 01:28:36.946780 kubelet[3198]: E0128 01:28:36.946165 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84868d5f79-45qm5" podUID="79191197-2837-43fa-b284-2023c360b9e2" Jan 28 01:28:37.726540 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:49504.service - OpenSSH per-connection server daemon (10.200.16.10:49504). Jan 28 01:28:37.943137 kubelet[3198]: E0128 01:28:37.943090 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f6944bcdb-mk9w8" podUID="efad0924-58e2-470d-a190-d57cd8685e98" Jan 28 01:28:38.217911 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 49504 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:38.219274 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:38.224053 systemd-logind[1706]: New session 26 of user core. 
Jan 28 01:28:38.231643 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:28:38.649413 sshd[6367]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:38.652982 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:49504.service: Deactivated successfully. Jan 28 01:28:38.656953 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:28:38.657776 systemd-logind[1706]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:28:38.659255 systemd-logind[1706]: Removed session 26. Jan 28 01:28:41.434309 kubelet[3198]: E0128 01:28:41.434246 3198 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: EOF" Jan 28 01:28:41.944219 kubelet[3198]: E0128 01:28:41.944129 3198 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kwqqh" podUID="b0ef4dca-fc9b-48e6-a83b-e247508a0b04"