Jan 28 01:20:53.207813 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 28 01:20:53.207836 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Jan 27 23:05:14 -00 2026
Jan 28 01:20:53.207844 kernel: KASLR enabled
Jan 28 01:20:53.207850 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 28 01:20:53.207857 kernel: printk: bootconsole [pl11] enabled
Jan 28 01:20:53.207863 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:20:53.207871 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 28 01:20:53.207877 kernel: random: crng init done
Jan 28 01:20:53.207883 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:20:53.207889 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 28 01:20:53.207895 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207901 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207908 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 28 01:20:53.207915 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207922 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207928 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207935 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207943 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207949 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207964 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 28 01:20:53.207972 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:20:53.207979 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 28 01:20:53.207985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 28 01:20:53.207992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 28 01:20:53.207998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 28 01:20:53.208005 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 28 01:20:53.208011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 28 01:20:53.208018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 28 01:20:53.208026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 28 01:20:53.208032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 28 01:20:53.208039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 28 01:20:53.208045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 28 01:20:53.208052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 28 01:20:53.208058 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 28 01:20:53.208064 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 28 01:20:53.208071 kernel: Zone ranges:
Jan 28 01:20:53.208077 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 28 01:20:53.208083 kernel: DMA32 empty
Jan 28 01:20:53.208090 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 28 01:20:53.208096 kernel: Movable zone start for each node
Jan 28 01:20:53.208107 kernel: Early memory node ranges
Jan 28 01:20:53.208114 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 28 01:20:53.208121 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 28 01:20:53.208127 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 28 01:20:53.208134 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 28 01:20:53.208142 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 28 01:20:53.208149 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 28 01:20:53.208156 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 28 01:20:53.208163 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 28 01:20:53.208170 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 28 01:20:53.208177 kernel: psci: probing for conduit method from ACPI.
Jan 28 01:20:53.208183 kernel: psci: PSCIv1.1 detected in firmware.
Jan 28 01:20:53.208190 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 28 01:20:53.208197 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 28 01:20:53.208204 kernel: psci: SMC Calling Convention v1.4
Jan 28 01:20:53.208211 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 28 01:20:53.208217 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 28 01:20:53.208226 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 28 01:20:53.208233 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 28 01:20:53.208240 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 28 01:20:53.208246 kernel: Detected PIPT I-cache on CPU0
Jan 28 01:20:53.208253 kernel: CPU features: detected: GIC system register CPU interface
Jan 28 01:20:53.208260 kernel: CPU features: detected: Hardware dirty bit management
Jan 28 01:20:53.208267 kernel: CPU features: detected: Spectre-BHB
Jan 28 01:20:53.208274 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 28 01:20:53.208281 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 28 01:20:53.208288 kernel: CPU features: detected: ARM erratum 1418040
Jan 28 01:20:53.208295 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 28 01:20:53.208303 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 28 01:20:53.208310 kernel: alternatives: applying boot alternatives
Jan 28 01:20:53.208319 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf
Jan 28 01:20:53.208326 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:20:53.208333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:20:53.208340 kernel: Fallback order for Node 0: 0
Jan 28 01:20:53.208347 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 28 01:20:53.208354 kernel: Policy zone: Normal
Jan 28 01:20:53.208361 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:20:53.208368 kernel: software IO TLB: area num 2.
Jan 28 01:20:53.208375 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 28 01:20:53.208384 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Jan 28 01:20:53.208391 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 28 01:20:53.208398 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:20:53.208405 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:20:53.208412 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 28 01:20:53.208419 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:20:53.208426 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:20:53.208433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:20:53.208440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 28 01:20:53.208447 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 28 01:20:53.208454 kernel: GICv3: 960 SPIs implemented
Jan 28 01:20:53.208462 kernel: GICv3: 0 Extended SPIs implemented
Jan 28 01:20:53.208469 kernel: Root IRQ handler: gic_handle_irq
Jan 28 01:20:53.208476 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 28 01:20:53.208482 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 28 01:20:53.208489 kernel: ITS: No ITS available, not enabling LPIs
Jan 28 01:20:53.208496 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:20:53.208503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 28 01:20:53.208510 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 28 01:20:53.208517 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 28 01:20:53.208524 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 28 01:20:53.208531 kernel: Console: colour dummy device 80x25
Jan 28 01:20:53.208539 kernel: printk: console [tty1] enabled
Jan 28 01:20:53.208547 kernel: ACPI: Core revision 20230628
Jan 28 01:20:53.208554 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 28 01:20:53.208561 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:20:53.208568 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 01:20:53.208575 kernel: landlock: Up and running.
Jan 28 01:20:53.208582 kernel: SELinux: Initializing.
Jan 28 01:20:53.208589 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:20:53.208596 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:20:53.208605 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 28 01:20:53.208612 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 28 01:20:53.208620 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 28 01:20:53.208627 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 28 01:20:53.208634 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 28 01:20:53.208641 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:20:53.208648 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:20:53.208656 kernel: Remapping and enabling EFI services.
Jan 28 01:20:53.208669 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:20:53.208677 kernel: Detected PIPT I-cache on CPU1
Jan 28 01:20:53.208684 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 28 01:20:53.208691 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 28 01:20:53.208700 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 28 01:20:53.208707 kernel: smp: Brought up 1 node, 2 CPUs
Jan 28 01:20:53.208715 kernel: SMP: Total of 2 processors activated.
Jan 28 01:20:53.208722 kernel: CPU features: detected: 32-bit EL0 Support
Jan 28 01:20:53.208730 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 28 01:20:53.208739 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 28 01:20:53.208747 kernel: CPU features: detected: CRC32 instructions
Jan 28 01:20:53.208754 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 28 01:20:53.208762 kernel: CPU features: detected: LSE atomic instructions
Jan 28 01:20:53.208769 kernel: CPU features: detected: Privileged Access Never
Jan 28 01:20:53.208777 kernel: CPU: All CPU(s) started at EL1
Jan 28 01:20:53.208784 kernel: alternatives: applying system-wide alternatives
Jan 28 01:20:53.208791 kernel: devtmpfs: initialized
Jan 28 01:20:53.208799 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:20:53.208808 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 28 01:20:53.208816 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:20:53.208823 kernel: SMBIOS 3.1.0 present.
Jan 28 01:20:53.208831 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 28 01:20:53.208838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:20:53.208846 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 28 01:20:53.208853 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 28 01:20:53.208861 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 28 01:20:53.208868 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:20:53.208877 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 28 01:20:53.208885 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:20:53.208892 kernel: cpuidle: using governor menu
Jan 28 01:20:53.208900 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 28 01:20:53.208907 kernel: ASID allocator initialised with 32768 entries
Jan 28 01:20:53.208914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:20:53.208922 kernel: Serial: AMBA PL011 UART driver
Jan 28 01:20:53.208929 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 28 01:20:53.208937 kernel: Modules: 0 pages in range for non-PLT usage
Jan 28 01:20:53.208946 kernel: Modules: 509008 pages in range for PLT usage
Jan 28 01:20:53.208953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:20:53.211474 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:20:53.211482 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 28 01:20:53.211490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 28 01:20:53.211497 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:20:53.211505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:20:53.211512 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 28 01:20:53.211520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 28 01:20:53.211530 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:20:53.211537 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:20:53.211545 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:20:53.211552 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:20:53.211559 kernel: ACPI: Interpreter enabled
Jan 28 01:20:53.211567 kernel: ACPI: Using GIC for interrupt routing
Jan 28 01:20:53.211574 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 28 01:20:53.211582 kernel: printk: console [ttyAMA0] enabled
Jan 28 01:20:53.211589 kernel: printk: bootconsole [pl11] disabled
Jan 28 01:20:53.211598 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 28 01:20:53.211606 kernel: iommu: Default domain type: Translated
Jan 28 01:20:53.211614 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 28 01:20:53.211624 kernel: efivars: Registered efivars operations
Jan 28 01:20:53.211632 kernel: vgaarb: loaded
Jan 28 01:20:53.211639 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 28 01:20:53.211646 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:20:53.211654 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:20:53.211661 kernel: pnp: PnP ACPI init
Jan 28 01:20:53.211670 kernel: pnp: PnP ACPI: found 0 devices
Jan 28 01:20:53.211678 kernel: NET: Registered PF_INET protocol family
Jan 28 01:20:53.211685 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:20:53.211693 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:20:53.211701 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:20:53.211709 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:20:53.211716 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:20:53.211724 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:20:53.211732 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:20:53.211741 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:20:53.211748 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:20:53.211756 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:20:53.211763 kernel: kvm [1]: HYP mode not available
Jan 28 01:20:53.211771 kernel: Initialise system trusted keyrings
Jan 28 01:20:53.211778 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:20:53.211786 kernel: Key type asymmetric registered
Jan 28 01:20:53.211793 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:20:53.211800 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 01:20:53.211810 kernel: io scheduler mq-deadline registered
Jan 28 01:20:53.211817 kernel: io scheduler kyber registered
Jan 28 01:20:53.211825 kernel: io scheduler bfq registered
Jan 28 01:20:53.211832 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:20:53.211840 kernel: thunder_xcv, ver 1.0
Jan 28 01:20:53.211847 kernel: thunder_bgx, ver 1.0
Jan 28 01:20:53.211854 kernel: nicpf, ver 1.0
Jan 28 01:20:53.211862 kernel: nicvf, ver 1.0
Jan 28 01:20:53.212021 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 28 01:20:53.212104 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T01:20:52 UTC (1769563252)
Jan 28 01:20:53.212115 kernel: efifb: probing for efifb
Jan 28 01:20:53.212123 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 28 01:20:53.212130 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 28 01:20:53.212138 kernel: efifb: scrolling: redraw
Jan 28 01:20:53.212145 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 28 01:20:53.212153 kernel: Console: switching to colour frame buffer device 128x48
Jan 28 01:20:53.212160 kernel: fb0: EFI VGA frame buffer device
Jan 28 01:20:53.212170 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 28 01:20:53.212177 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 28 01:20:53.212185 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 28 01:20:53.212192 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 28 01:20:53.212200 kernel: watchdog: Hard watchdog permanently disabled
Jan 28 01:20:53.212207 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:20:53.212214 kernel: Segment Routing with IPv6
Jan 28 01:20:53.212221 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:20:53.212229 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:20:53.212238 kernel: Key type dns_resolver registered
Jan 28 01:20:53.212245 kernel: registered taskstats version 1
Jan 28 01:20:53.212252 kernel: Loading compiled-in X.509 certificates
Jan 28 01:20:53.212260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 00ce1dc8bc64b61f07099b23b76dee034878817c'
Jan 28 01:20:53.212267 kernel: Key type .fscrypt registered
Jan 28 01:20:53.212274 kernel: Key type fscrypt-provisioning registered
Jan 28 01:20:53.212282 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 01:20:53.212289 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:20:53.212297 kernel: ima: No architecture policies found
Jan 28 01:20:53.212306 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 28 01:20:53.212313 kernel: clk: Disabling unused clocks
Jan 28 01:20:53.212321 kernel: Freeing unused kernel memory: 39424K
Jan 28 01:20:53.212328 kernel: Run /init as init process
Jan 28 01:20:53.212336 kernel: with arguments:
Jan 28 01:20:53.212343 kernel: /init
Jan 28 01:20:53.212350 kernel: with environment:
Jan 28 01:20:53.212357 kernel: HOME=/
Jan 28 01:20:53.212365 kernel: TERM=linux
Jan 28 01:20:53.212374 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 01:20:53.212385 systemd[1]: Detected virtualization microsoft.
Jan 28 01:20:53.212393 systemd[1]: Detected architecture arm64.
Jan 28 01:20:53.212401 systemd[1]: Running in initrd.
Jan 28 01:20:53.212408 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:20:53.212416 systemd[1]: Hostname set to .
Jan 28 01:20:53.212424 systemd[1]: Initializing machine ID from random generator.
Jan 28 01:20:53.212433 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:20:53.212441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:20:53.212449 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:20:53.212458 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:20:53.212466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:20:53.212475 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:20:53.212483 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:20:53.212492 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 01:20:53.212502 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 01:20:53.212510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:20:53.212518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:20:53.212526 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:20:53.212534 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:20:53.212542 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:20:53.212550 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:20:53.212558 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:20:53.212568 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:20:53.212576 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:20:53.212584 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 01:20:53.212592 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:20:53.212600 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:20:53.212608 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:20:53.212616 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:20:53.212625 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:20:53.212634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:20:53.212642 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:20:53.212650 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:20:53.212658 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:20:53.212666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:20:53.212689 systemd-journald[217]: Collecting audit messages is disabled.
Jan 28 01:20:53.212710 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:20:53.212720 systemd-journald[217]: Journal started
Jan 28 01:20:53.212738 systemd-journald[217]: Runtime Journal (/run/log/journal/e75815e87a094c0ba6af3af618f489d7) is 8.0M, max 78.5M, 70.5M free.
Jan 28 01:20:53.205989 systemd-modules-load[218]: Inserted module 'overlay'
Jan 28 01:20:53.227043 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:20:53.228980 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:20:53.256710 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:20:53.256733 kernel: Bridge firewalling registered
Jan 28 01:20:53.244883 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 28 01:20:53.245404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:20:53.252374 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:20:53.260529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:20:53.269480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:20:53.293188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:20:53.305119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:20:53.315114 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:20:53.334132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:20:53.340201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:20:53.357315 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:20:53.364249 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:20:53.372156 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:20:53.400134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:20:53.412218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:20:53.430329 dracut-cmdline[250]: dracut-dracut-053
Jan 28 01:20:53.439170 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf
Jan 28 01:20:53.435126 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:20:53.445052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:20:53.503338 systemd-resolved[261]: Positive Trust Anchors:
Jan 28 01:20:53.503354 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:20:53.503385 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:20:53.505545 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jan 28 01:20:53.512156 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:20:53.519408 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:20:53.601976 kernel: SCSI subsystem initialized
Jan 28 01:20:53.608967 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:20:53.618977 kernel: iscsi: registered transport (tcp)
Jan 28 01:20:53.635881 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:20:53.635922 kernel: QLogic iSCSI HBA Driver
Jan 28 01:20:53.674245 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:20:53.688195 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:20:53.717824 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:20:53.717886 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:20:53.724000 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 01:20:53.772981 kernel: raid6: neonx8 gen() 15807 MB/s
Jan 28 01:20:53.791970 kernel: raid6: neonx4 gen() 15694 MB/s
Jan 28 01:20:53.810966 kernel: raid6: neonx2 gen() 13274 MB/s
Jan 28 01:20:53.830968 kernel: raid6: neonx1 gen() 10486 MB/s
Jan 28 01:20:53.849962 kernel: raid6: int64x8 gen() 6975 MB/s
Jan 28 01:20:53.868966 kernel: raid6: int64x4 gen() 7354 MB/s
Jan 28 01:20:53.888967 kernel: raid6: int64x2 gen() 6146 MB/s
Jan 28 01:20:53.910760 kernel: raid6: int64x1 gen() 5072 MB/s
Jan 28 01:20:53.910770 kernel: raid6: using algorithm neonx8 gen() 15807 MB/s
Jan 28 01:20:53.933983 kernel: raid6: .... xor() 11887 MB/s, rmw enabled
Jan 28 01:20:53.934002 kernel: raid6: using neon recovery algorithm
Jan 28 01:20:53.943990 kernel: xor: measuring software checksum speed
Jan 28 01:20:53.944002 kernel: 8regs : 19759 MB/sec
Jan 28 01:20:53.947863 kernel: 32regs : 19669 MB/sec
Jan 28 01:20:53.950762 kernel: arm64_neon : 27195 MB/sec
Jan 28 01:20:53.954346 kernel: xor: using function: arm64_neon (27195 MB/sec)
Jan 28 01:20:54.003979 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:20:54.014011 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:20:54.030145 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:20:54.050572 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 28 01:20:54.055123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:20:54.071684 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:20:54.090760 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Jan 28 01:20:54.119468 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:20:54.132477 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:20:54.170548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:20:54.184188 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:20:54.201606 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:20:54.208351 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:20:54.220040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:20:54.242174 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:20:54.266680 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:20:54.284490 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:20:54.294432 kernel: hv_vmbus: Vmbus version:5.3
Jan 28 01:20:54.311328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:20:54.350299 kernel: hv_vmbus: registering driver hid_hyperv
Jan 28 01:20:54.350326 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 28 01:20:54.350337 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 28 01:20:54.350346 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 28 01:20:54.350517 kernel: hv_vmbus: registering driver hv_storvsc
Jan 28 01:20:54.350529 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 28 01:20:54.317158 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:20:54.369530 kernel: scsi host1: storvsc_host_t
Jan 28 01:20:54.369700 kernel: scsi host0: storvsc_host_t
Jan 28 01:20:54.352785 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:20:54.398413 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 28 01:20:54.398469 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 28 01:20:54.398479 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 28 01:20:54.398489 kernel: hv_vmbus: registering driver hv_netvsc
Jan 28 01:20:54.369262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:20:54.415975 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 28 01:20:54.369432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:20:54.392112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:20:54.421853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:20:54.444417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:20:54.462336 kernel: PTP clock support registered
Jan 28 01:20:54.464448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:20:54.482739 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:20:54.508012 kernel: hv_utils: Registering HyperV Utility Driver
Jan 28 01:20:54.508036 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 28 01:20:54.508215 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:20:54.508226 kernel: hv_vmbus: registering driver hv_utils
Jan 28 01:20:54.508236 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: VF slot 1 added
Jan 28 01:20:54.508330 kernel: hv_utils: Heartbeat IC version 3.0
Jan 28 01:20:54.482845 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:20:54.353951 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 28 01:20:54.369327 kernel: hv_utils: Shutdown IC version 3.2
Jan 28 01:20:54.369345 kernel: hv_utils: TimeSync IC version 4.0
Jan 28 01:20:54.369355 systemd-journald[217]: Time jumped backwards, rotating.
Jan 28 01:20:54.512150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:20:54.387698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 01:20:54.389326 kernel: hv_vmbus: registering driver hv_pci
Jan 28 01:20:54.512293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:20:54.407601 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 28 01:20:54.407780 kernel: hv_pci 0136b8db-c8cc-48d9-8a94-7f072ac2c5e7: PCI VMBus probing: Using version 0x10004
Jan 28 01:20:54.407978 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 28 01:20:54.351890 systemd-resolved[261]: Clock change detected. Flushing caches.
Jan 28 01:20:54.418475 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 28 01:20:54.421312 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 28 01:20:54.421420 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 28 01:20:54.363245 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:20:54.444728 kernel: hv_pci 0136b8db-c8cc-48d9-8a94-7f072ac2c5e7: PCI host bridge to bus c8cc:00
Jan 28 01:20:54.444906 kernel: pci_bus c8cc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 28 01:20:54.379072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:20:54.464639 kernel: pci_bus c8cc:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 28 01:20:54.464808 kernel: pci c8cc:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 28 01:20:54.433572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:20:54.476813 kernel: pci c8cc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 28 01:20:54.476891 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:20:54.477029 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:20:54.503133 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 28 01:20:54.503312 kernel: pci c8cc:00:02.0: enabling Extended Tags
Jan 28 01:20:54.503333 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 01:20:54.531497 kernel: pci c8cc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c8cc:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 28 01:20:54.555371 kernel: pci_bus c8cc:00: busn_res: [bus 00-ff] end is updated to 00
Jan 28 01:20:54.555567 kernel: pci c8cc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 28 01:20:54.557040 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:20:54.604271 kernel: mlx5_core c8cc:00:02.0: enabling device (0000 -> 0002)
Jan 28 01:20:54.610846 kernel: mlx5_core c8cc:00:02.0: firmware version: 16.30.5026
Jan 28 01:20:54.806391 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: VF registering: eth1
Jan 28 01:20:54.806574 kernel: mlx5_core c8cc:00:02.0 eth1: joined to eth0
Jan 28 01:20:54.813044 kernel: mlx5_core c8cc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 28 01:20:54.823856 kernel: mlx5_core c8cc:00:02.0 enP51404s1: renamed from eth1
Jan 28 01:20:55.077031 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 28 01:20:55.097787 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 28 01:20:55.118856 kernel: BTRFS: device fsid 0fc26676-8036-4cd5-8c30-2943afb25b0b devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (489)
Jan 28 01:20:55.132343 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 28 01:20:55.138180 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 28 01:20:55.163056 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:20:55.280929 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (482)
Jan 28 01:20:55.293304 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 28 01:20:56.191931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:20:56.192638 disk-uuid[609]: The operation has completed successfully.
Jan 28 01:20:56.252927 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:20:56.253012 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:20:56.285033 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 01:20:56.295461 sh[699]: Success
Jan 28 01:20:56.322878 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 28 01:20:56.679297 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:20:56.687970 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 01:20:56.692381 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 01:20:56.732301 kernel: BTRFS info (device dm-0): first mount of filesystem 0fc26676-8036-4cd5-8c30-2943afb25b0b
Jan 28 01:20:56.732348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:20:56.738288 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 01:20:56.742729 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:20:56.746303 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 01:20:57.076080 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 01:20:57.081386 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:20:57.099005 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:20:57.108029 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:20:57.135470 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:20:57.135524 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:20:57.139510 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:20:57.175263 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:20:57.182278 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 01:20:57.193852 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:20:57.200567 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:20:57.215097 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:20:57.220851 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:20:57.237737 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:20:57.263985 systemd-networkd[883]: lo: Link UP
Jan 28 01:20:57.263993 systemd-networkd[883]: lo: Gained carrier
Jan 28 01:20:57.265651 systemd-networkd[883]: Enumeration completed
Jan 28 01:20:57.268646 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:20:57.274322 systemd-networkd[883]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:20:57.274326 systemd-networkd[883]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:20:57.274978 systemd[1]: Reached target network.target - Network.
Jan 28 01:20:57.356856 kernel: mlx5_core c8cc:00:02.0 enP51404s1: Link up
Jan 28 01:20:57.400845 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: Data path switched to VF: enP51404s1
Jan 28 01:20:57.401396 systemd-networkd[883]: enP51404s1: Link UP
Jan 28 01:20:57.401477 systemd-networkd[883]: eth0: Link UP
Jan 28 01:20:57.401571 systemd-networkd[883]: eth0: Gained carrier
Jan 28 01:20:57.401579 systemd-networkd[883]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:20:57.412182 systemd-networkd[883]: enP51404s1: Gained carrier
Jan 28 01:20:57.431871 systemd-networkd[883]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 28 01:20:58.133905 ignition[881]: Ignition 2.19.0
Jan 28 01:20:58.133916 ignition[881]: Stage: fetch-offline
Jan 28 01:20:58.137200 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:20:58.133951 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:20:58.133958 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:20:58.134060 ignition[881]: parsed url from cmdline: ""
Jan 28 01:20:58.134063 ignition[881]: no config URL provided
Jan 28 01:20:58.134067 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:20:58.161080 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 28 01:20:58.134075 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:20:58.134080 ignition[881]: failed to fetch config: resource requires networking
Jan 28 01:20:58.134290 ignition[881]: Ignition finished successfully
Jan 28 01:20:58.178863 ignition[892]: Ignition 2.19.0
Jan 28 01:20:58.178871 ignition[892]: Stage: fetch
Jan 28 01:20:58.179089 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:20:58.179098 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:20:58.179226 ignition[892]: parsed url from cmdline: ""
Jan 28 01:20:58.179229 ignition[892]: no config URL provided
Jan 28 01:20:58.179234 ignition[892]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:20:58.179241 ignition[892]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:20:58.179265 ignition[892]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 28 01:20:58.265574 ignition[892]: GET result: OK
Jan 28 01:20:58.265644 ignition[892]: config has been read from IMDS userdata
Jan 28 01:20:58.265686 ignition[892]: parsing config with SHA512: aa76b82bbf8e34d533e3e837ff9aad6b31635d45b0e0b09456041586a2d4b6eb6fa3710eb29d010ba1e1b26e80dc722c2508297d4fc80b6596342eef1f17bb61
Jan 28 01:20:58.269527 unknown[892]: fetched base config from "system"
Jan 28 01:20:58.270088 ignition[892]: fetch: fetch complete
Jan 28 01:20:58.269534 unknown[892]: fetched base config from "system"
Jan 28 01:20:58.270093 ignition[892]: fetch: fetch passed
Jan 28 01:20:58.269547 unknown[892]: fetched user config from "azure"
Jan 28 01:20:58.270146 ignition[892]: Ignition finished successfully
Jan 28 01:20:58.272005 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 28 01:20:58.289067 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:20:58.312897 ignition[898]: Ignition 2.19.0
Jan 28 01:20:58.312906 ignition[898]: Stage: kargs
Jan 28 01:20:58.319092 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:20:58.313070 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:20:58.313079 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:20:58.313936 ignition[898]: kargs: kargs passed
Jan 28 01:20:58.333127 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:20:58.313982 ignition[898]: Ignition finished successfully
Jan 28 01:20:58.358402 ignition[904]: Ignition 2.19.0
Jan 28 01:20:58.358412 ignition[904]: Stage: disks
Jan 28 01:20:58.362524 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:20:58.358589 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:20:58.369205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:20:58.358599 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:20:58.378229 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:20:58.359598 ignition[904]: disks: disks passed
Jan 28 01:20:58.387442 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:20:58.359646 ignition[904]: Ignition finished successfully
Jan 28 01:20:58.396784 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:20:58.406513 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:20:58.433000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:20:58.525352 systemd-fsck[913]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 28 01:20:58.532084 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:20:58.546091 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:20:58.601904 kernel: EXT4-fs (sda9): mounted filesystem 2c7419f5-3bc3-4c5f-b132-f03585db88cd r/w with ordered data mode. Quota mode: none.
Jan 28 01:20:58.601872 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:20:58.606083 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:20:58.651936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:20:58.671882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (924)
Jan 28 01:20:58.683000 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:20:58.683054 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:20:58.686564 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:20:58.696513 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:20:58.693033 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:20:58.701038 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 28 01:20:58.707536 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:20:58.707571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:20:58.720241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:20:58.733549 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:20:58.752078 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:20:58.867994 systemd-networkd[883]: eth0: Gained IPv6LL
Jan 28 01:20:59.266092 coreos-metadata[941]: Jan 28 01:20:59.266 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 28 01:20:59.275420 coreos-metadata[941]: Jan 28 01:20:59.275 INFO Fetch successful
Jan 28 01:20:59.279607 coreos-metadata[941]: Jan 28 01:20:59.275 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 28 01:20:59.290419 coreos-metadata[941]: Jan 28 01:20:59.290 INFO Fetch successful
Jan 28 01:20:59.337894 coreos-metadata[941]: Jan 28 01:20:59.337 INFO wrote hostname ci-4081.3.6-n-6d8ceced70 to /sysroot/etc/hostname
Jan 28 01:20:59.345952 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 28 01:20:59.674792 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 01:20:59.712946 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory
Jan 28 01:20:59.738446 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 01:20:59.758117 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 01:21:01.152739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:21:01.165297 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:21:01.174009 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:21:01.191191 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:21:01.188127 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:21:01.213731 ignition[1042]: INFO : Ignition 2.19.0
Jan 28 01:21:01.219928 ignition[1042]: INFO : Stage: mount
Jan 28 01:21:01.219928 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:21:01.219928 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:21:01.219928 ignition[1042]: INFO : mount: mount passed
Jan 28 01:21:01.219928 ignition[1042]: INFO : Ignition finished successfully
Jan 28 01:21:01.216172 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 01:21:01.223398 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 01:21:01.243998 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 01:21:01.270059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:21:01.289847 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1054)
Jan 28 01:21:01.301284 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:21:01.301300 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:21:01.306073 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:21:01.315844 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:21:01.317387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:21:01.338848 ignition[1072]: INFO : Ignition 2.19.0
Jan 28 01:21:01.338848 ignition[1072]: INFO : Stage: files
Jan 28 01:21:01.345340 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:21:01.345340 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:21:01.345340 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 01:21:01.345340 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 01:21:01.345340 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 01:21:01.445251 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 01:21:01.451698 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 01:21:01.451698 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 01:21:01.447268 unknown[1072]: wrote ssh authorized keys file for user: core
Jan 28 01:21:01.479759 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 28 01:21:01.488592 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 28 01:21:01.543156 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 28 01:21:01.771932 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 28 01:21:01.771932 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 28 01:21:02.262737 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 28 01:21:02.617590 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 28 01:21:02.617590 ignition[1072]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 28 01:21:02.647774 ignition[1072]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: files passed
Jan 28 01:21:02.657100 ignition[1072]: INFO : Ignition finished successfully
Jan 28 01:21:02.666187 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 01:21:02.708512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 01:21:02.717992 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 01:21:02.726383 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 01:21:02.730921 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:21:02.761878 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:21:02.761878 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:21:02.776921 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:21:02.777458 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:21:02.790094 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 01:21:02.812088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 01:21:02.839406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 01:21:02.841899 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 01:21:02.850487 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:21:02.860927 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:21:02.870362 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:21:02.873031 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:21:02.902495 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:21:02.916098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:21:02.935981 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:21:02.936095 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:21:02.946636 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:21:02.957530 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:21:02.968692 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:21:02.978328 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:21:02.978394 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:21:02.992280 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:21:03.002523 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:21:03.011517 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:21:03.020733 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:21:03.031155 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:21:03.041769 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:21:03.051405 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:21:03.062644 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:21:03.073601 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:21:03.082948 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:21:03.091438 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:21:03.091506 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:21:03.104946 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:21:03.114815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:21:03.125404 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:21:03.130577 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:21:03.136692 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:21:03.136749 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:21:03.152631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:21:03.152677 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:21:03.162702 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:21:03.162747 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:21:03.171993 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 28 01:21:03.172041 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 01:21:03.233870 ignition[1124]: INFO : Ignition 2.19.0 Jan 28 01:21:03.233870 ignition[1124]: INFO : Stage: umount Jan 28 01:21:03.233870 ignition[1124]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:21:03.233870 ignition[1124]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:21:03.233870 ignition[1124]: INFO : umount: umount passed Jan 28 01:21:03.233870 ignition[1124]: INFO : Ignition finished successfully Jan 28 01:21:03.190068 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:21:03.201020 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:21:03.210964 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:21:03.211048 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:21:03.226756 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:21:03.226820 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:21:03.245700 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:21:03.246241 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:21:03.246340 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:21:03.257328 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:21:03.257452 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:21:03.263770 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:21:03.263826 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:21:03.275221 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 01:21:03.275281 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 01:21:03.279956 systemd[1]: Stopped target network.target - Network. Jan 28 01:21:03.287361 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:21:03.287412 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:21:03.296768 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:21:03.305992 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:21:03.310092 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:21:03.316299 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:21:03.325043 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:21:03.334436 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:21:03.334493 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:21:03.343389 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:21:03.343423 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:21:03.352793 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:21:03.352848 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:21:03.362660 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:21:03.362714 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:21:03.372228 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:21:03.380769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 28 01:21:03.389289 systemd-networkd[883]: eth0: DHCPv6 lease lost Jan 28 01:21:03.391359 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:21:03.583555 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: Data path switched from VF: enP51404s1 Jan 28 01:21:03.391529 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:21:03.409501 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:21:03.409688 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:21:03.420205 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:21:03.420257 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:21:03.446064 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:21:03.454657 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:21:03.454727 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:21:03.467882 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:21:03.467933 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:21:03.476328 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:21:03.476368 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:21:03.485222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:21:03.485261 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:21:03.494706 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:21:03.534101 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:21:03.534268 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:21:03.544717 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:21:03.544762 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:21:03.553978 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:21:03.554008 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:21:03.571385 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:21:03.571443 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:21:03.583628 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:21:03.583689 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:21:03.593098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:21:03.593156 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:21:03.633186 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:21:03.643896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:21:03.643971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:21:03.655547 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 01:21:03.655608 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:21:03.665716 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 28 01:21:03.665769 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:21:03.676006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:21:03.676048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:21:03.686021 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:21:03.686142 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:21:03.697040 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:21:03.698856 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:21:03.860064 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:21:03.860167 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:21:03.873078 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:21:03.877750 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:21:03.877806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:21:03.897102 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:21:03.926818 systemd[1]: Switching root. Jan 28 01:21:03.992493 systemd-journald[217]: Journal stopped Jan 28 01:20:53.207813 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 28 01:20:53.207836 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Jan 27 23:05:14 -00 2026 Jan 28 01:20:53.207844 kernel: KASLR enabled Jan 28 01:20:53.207850 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 28 01:20:53.207857 kernel: printk: bootconsole [pl11] enabled Jan 28 01:20:53.207863 kernel: efi: EFI v2.7 by EDK II Jan 28 01:20:53.207871 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 28 01:20:53.207877 kernel: random: crng init done Jan 28 01:20:53.207883 kernel: ACPI: Early table checksum verification disabled Jan 28 01:20:53.207889 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 28 01:20:53.207895 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207901 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207908 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 28 01:20:53.207915 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207922 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207928 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207935 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207943 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207949 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207964 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 28 01:20:53.207972 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:20:53.207979 kernel: ACPI: SPCR: 
console: pl011,mmio32,0xeffec000,115200 Jan 28 01:20:53.207985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 28 01:20:53.207992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 28 01:20:53.207998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 28 01:20:53.208005 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 28 01:20:53.208011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 28 01:20:53.208018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 28 01:20:53.208026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 28 01:20:53.208032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 28 01:20:53.208039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 28 01:20:53.208045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 28 01:20:53.208052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 28 01:20:53.208058 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 28 01:20:53.208064 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 28 01:20:53.208071 kernel: Zone ranges: Jan 28 01:20:53.208077 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 28 01:20:53.208083 kernel: DMA32 empty Jan 28 01:20:53.208090 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 28 01:20:53.208096 kernel: Movable zone start for each node Jan 28 01:20:53.208107 kernel: Early memory node ranges Jan 28 01:20:53.208114 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 28 01:20:53.208121 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 28 01:20:53.208127 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 28 01:20:53.208134 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 28 01:20:53.208142 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 28 01:20:53.208149 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 28 01:20:53.208156 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 28 01:20:53.208163 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 28 01:20:53.208170 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 28 01:20:53.208177 kernel: psci: probing for conduit method from ACPI. Jan 28 01:20:53.208183 kernel: psci: PSCIv1.1 detected in firmware. Jan 28 01:20:53.208190 kernel: psci: Using standard PSCI v0.2 function IDs Jan 28 01:20:53.208197 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 28 01:20:53.208204 kernel: psci: SMC Calling Convention v1.4 Jan 28 01:20:53.208211 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 28 01:20:53.208217 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 28 01:20:53.208226 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 28 01:20:53.208233 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 28 01:20:53.208240 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 28 01:20:53.208246 kernel: Detected PIPT I-cache on CPU0 Jan 28 01:20:53.208253 kernel: CPU features: detected: GIC system register CPU interface Jan 28 01:20:53.208260 kernel: CPU features: detected: Hardware dirty bit management Jan 28 01:20:53.208267 kernel: CPU features: detected: Spectre-BHB Jan 28 01:20:53.208274 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 28 01:20:53.208281 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 28 01:20:53.208288 kernel: CPU features: detected: ARM erratum 1418040 Jan 28 01:20:53.208295 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 28 01:20:53.208303 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 28 01:20:53.208310 kernel: alternatives: applying boot alternatives Jan 28 01:20:53.208319 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf Jan 28 01:20:53.208326 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 01:20:53.208333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 01:20:53.208340 kernel: Fallback order for Node 0: 0 Jan 28 01:20:53.208347 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 28 01:20:53.208354 kernel: Policy zone: Normal Jan 28 01:20:53.208361 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 01:20:53.208368 kernel: software IO TLB: area num 2. Jan 28 01:20:53.208375 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 28 01:20:53.208384 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 28 01:20:53.208391 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 28 01:20:53.208398 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 01:20:53.208405 kernel: rcu: RCU event tracing is enabled. Jan 28 01:20:53.208412 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 28 01:20:53.208419 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 01:20:53.208426 kernel: Tracing variant of Tasks RCU enabled. Jan 28 01:20:53.208433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 28 01:20:53.208440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 28 01:20:53.208447 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 28 01:20:53.208454 kernel: GICv3: 960 SPIs implemented Jan 28 01:20:53.208462 kernel: GICv3: 0 Extended SPIs implemented Jan 28 01:20:53.208469 kernel: Root IRQ handler: gic_handle_irq Jan 28 01:20:53.208476 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 28 01:20:53.208482 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 28 01:20:53.208489 kernel: ITS: No ITS available, not enabling LPIs Jan 28 01:20:53.208496 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 01:20:53.208503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 28 01:20:53.208510 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 28 01:20:53.208517 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 28 01:20:53.208524 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 28 01:20:53.208531 kernel: Console: colour dummy device 80x25 Jan 28 01:20:53.208539 kernel: printk: console [tty1] enabled Jan 28 01:20:53.208547 kernel: ACPI: Core revision 20230628 Jan 28 01:20:53.208554 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 28 01:20:53.208561 kernel: pid_max: default: 32768 minimum: 301 Jan 28 01:20:53.208568 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 28 01:20:53.208575 kernel: landlock: Up and running. Jan 28 01:20:53.208582 kernel: SELinux: Initializing. Jan 28 01:20:53.208589 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 01:20:53.208596 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 01:20:53.208605 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:20:53.208612 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:20:53.208620 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 28 01:20:53.208627 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 28 01:20:53.208634 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 28 01:20:53.208641 kernel: rcu: Hierarchical SRCU implementation. Jan 28 01:20:53.208648 kernel: rcu: Max phase no-delay instances is 400. Jan 28 01:20:53.208656 kernel: Remapping and enabling EFI services. Jan 28 01:20:53.208669 kernel: smp: Bringing up secondary CPUs ... Jan 28 01:20:53.208677 kernel: Detected PIPT I-cache on CPU1 Jan 28 01:20:53.208684 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 28 01:20:53.208691 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 28 01:20:53.208700 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 28 01:20:53.208707 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 01:20:53.208715 kernel: SMP: Total of 2 processors activated. 
Jan 28 01:20:53.208722 kernel: CPU features: detected: 32-bit EL0 Support Jan 28 01:20:53.208730 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 28 01:20:53.208739 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 28 01:20:53.208747 kernel: CPU features: detected: CRC32 instructions Jan 28 01:20:53.208754 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 28 01:20:53.208762 kernel: CPU features: detected: LSE atomic instructions Jan 28 01:20:53.208769 kernel: CPU features: detected: Privileged Access Never Jan 28 01:20:53.208777 kernel: CPU: All CPU(s) started at EL1 Jan 28 01:20:53.208784 kernel: alternatives: applying system-wide alternatives Jan 28 01:20:53.208791 kernel: devtmpfs: initialized Jan 28 01:20:53.208799 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 01:20:53.208808 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 28 01:20:53.208816 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 01:20:53.208823 kernel: SMBIOS 3.1.0 present. Jan 28 01:20:53.208831 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 28 01:20:53.208838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 01:20:53.208846 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 28 01:20:53.208853 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 28 01:20:53.208861 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 28 01:20:53.208868 kernel: audit: initializing netlink subsys (disabled) Jan 28 01:20:53.208877 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 28 01:20:53.208885 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 01:20:53.208892 kernel: cpuidle: using governor menu Jan 28 01:20:53.208900 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 28 01:20:53.208907 kernel: ASID allocator initialised with 32768 entries Jan 28 01:20:53.208914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 01:20:53.208922 kernel: Serial: AMBA PL011 UART driver Jan 28 01:20:53.208929 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 28 01:20:53.208937 kernel: Modules: 0 pages in range for non-PLT usage Jan 28 01:20:53.208946 kernel: Modules: 509008 pages in range for PLT usage Jan 28 01:20:53.208953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 01:20:53.211474 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 01:20:53.211482 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 28 01:20:53.211490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 28 01:20:53.211497 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 01:20:53.211505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 01:20:53.211512 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 28 01:20:53.211520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 28 01:20:53.211530 kernel: ACPI: Added _OSI(Module Device) Jan 28 01:20:53.211537 kernel: ACPI: Added _OSI(Processor Device) Jan 28 01:20:53.211545 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 01:20:53.211552 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 01:20:53.211559 kernel: ACPI: Interpreter enabled Jan 28 01:20:53.211567 kernel: ACPI: Using GIC for interrupt routing Jan 28 01:20:53.211574 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 28 01:20:53.211582 kernel: printk: console [ttyAMA0] enabled Jan 28 01:20:53.211589 kernel: printk: bootconsole [pl11] disabled Jan 28 01:20:53.211598 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 28 01:20:53.211606 kernel: iommu: Default domain type: Translated Jan 28 01:20:53.211614 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 28 01:20:53.211624 kernel: efivars: Registered efivars operations Jan 28 01:20:53.211632 kernel: vgaarb: loaded Jan 28 01:20:53.211639 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 28 01:20:53.211646 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 01:20:53.211654 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 01:20:53.211661 kernel: pnp: PnP ACPI init Jan 28 01:20:53.211670 kernel: pnp: PnP ACPI: found 0 devices Jan 28 01:20:53.211678 kernel: NET: Registered PF_INET protocol family Jan 28 01:20:53.211685 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 01:20:53.211693 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 01:20:53.211701 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 01:20:53.211709 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 01:20:53.211716 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 01:20:53.211724 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 01:20:53.211732 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 01:20:53.211741 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 01:20:53.211748 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 
01:20:53.211756 kernel: PCI: CLS 0 bytes, default 64 Jan 28 01:20:53.211763 kernel: kvm [1]: HYP mode not available Jan 28 01:20:53.211771 kernel: Initialise system trusted keyrings Jan 28 01:20:53.211778 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 01:20:53.211786 kernel: Key type asymmetric registered Jan 28 01:20:53.211793 kernel: Asymmetric key parser 'x509' registered Jan 28 01:20:53.211800 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 28 01:20:53.211810 kernel: io scheduler mq-deadline registered Jan 28 01:20:53.211817 kernel: io scheduler kyber registered Jan 28 01:20:53.211825 kernel: io scheduler bfq registered Jan 28 01:20:53.211832 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 01:20:53.211840 kernel: thunder_xcv, ver 1.0 Jan 28 01:20:53.211847 kernel: thunder_bgx, ver 1.0 Jan 28 01:20:53.211854 kernel: nicpf, ver 1.0 Jan 28 01:20:53.211862 kernel: nicvf, ver 1.0 Jan 28 01:20:53.212021 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 28 01:20:53.212104 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T01:20:52 UTC (1769563252) Jan 28 01:20:53.212115 kernel: efifb: probing for efifb Jan 28 01:20:53.212123 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 28 01:20:53.212130 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 28 01:20:53.212138 kernel: efifb: scrolling: redraw Jan 28 01:20:53.212145 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 28 01:20:53.212153 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 01:20:53.212160 kernel: fb0: EFI VGA frame buffer device Jan 28 01:20:53.212170 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 28 01:20:53.212177 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 01:20:53.212185 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 28 01:20:53.212192 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 28 01:20:53.212200 kernel: watchdog: Hard watchdog permanently disabled Jan 28 01:20:53.212207 kernel: NET: Registered PF_INET6 protocol family Jan 28 01:20:53.212214 kernel: Segment Routing with IPv6 Jan 28 01:20:53.212221 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 01:20:53.212229 kernel: NET: Registered PF_PACKET protocol family Jan 28 01:20:53.212238 kernel: Key type dns_resolver registered Jan 28 01:20:53.212245 kernel: registered taskstats version 1 Jan 28 01:20:53.212252 kernel: Loading compiled-in X.509 certificates Jan 28 01:20:53.212260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 00ce1dc8bc64b61f07099b23b76dee034878817c' Jan 28 01:20:53.212267 kernel: Key type .fscrypt registered Jan 28 01:20:53.212274 kernel: Key type fscrypt-provisioning registered Jan 28 01:20:53.212282 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 28 01:20:53.212289 kernel: ima: Allocated hash algorithm: sha1 Jan 28 01:20:53.212297 kernel: ima: No architecture policies found Jan 28 01:20:53.212306 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 28 01:20:53.212313 kernel: clk: Disabling unused clocks Jan 28 01:20:53.212321 kernel: Freeing unused kernel memory: 39424K Jan 28 01:20:53.212328 kernel: Run /init as init process Jan 28 01:20:53.212336 kernel: with arguments: Jan 28 01:20:53.212343 kernel: /init Jan 28 01:20:53.212350 kernel: with environment: Jan 28 01:20:53.212357 kernel: HOME=/ Jan 28 01:20:53.212365 kernel: TERM=linux Jan 28 01:20:53.212374 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 01:20:53.212385 systemd[1]: Detected virtualization microsoft. Jan 28 01:20:53.212393 systemd[1]: Detected architecture arm64. Jan 28 01:20:53.212401 systemd[1]: Running in initrd. Jan 28 01:20:53.212408 systemd[1]: No hostname configured, using default hostname. Jan 28 01:20:53.212416 systemd[1]: Hostname set to . Jan 28 01:20:53.212424 systemd[1]: Initializing machine ID from random generator. Jan 28 01:20:53.212433 systemd[1]: Queued start job for default target initrd.target. Jan 28 01:20:53.212441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:20:53.212449 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:20:53.212458 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 01:20:53.212466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:20:53.212475 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 01:20:53.212483 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 01:20:53.212492 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 01:20:53.212502 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 01:20:53.212510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:20:53.212518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:20:53.212526 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:20:53.212534 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:20:53.212542 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:20:53.212550 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:20:53.212558 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:20:53.212568 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:20:53.212576 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:20:53.212584 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 28 01:20:53.212592 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 28 01:20:53.212600 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:20:53.212608 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:20:53.212616 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:20:53.212625 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 01:20:53.212634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:20:53.212642 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 01:20:53.212650 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 01:20:53.212658 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:20:53.212666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:20:53.212689 systemd-journald[217]: Collecting audit messages is disabled. Jan 28 01:20:53.212710 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:20:53.212720 systemd-journald[217]: Journal started Jan 28 01:20:53.212738 systemd-journald[217]: Runtime Journal (/run/log/journal/e75815e87a094c0ba6af3af618f489d7) is 8.0M, max 78.5M, 70.5M free. Jan 28 01:20:53.205989 systemd-modules-load[218]: Inserted module 'overlay' Jan 28 01:20:53.227043 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:20:53.228980 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 01:20:53.256710 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 01:20:53.256733 kernel: Bridge firewalling registered Jan 28 01:20:53.244883 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 28 01:20:53.245404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:20:53.252374 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 01:20:53.260529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:20:53.269480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:20:53.293188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:20:53.305119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:20:53.315114 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:20:53.334132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:20:53.340201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:20:53.357315 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:20:53.364249 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:20:53.372156 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 01:20:53.400134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:20:53.412218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 28 01:20:53.430329 dracut-cmdline[250]: dracut-dracut-053 Jan 28 01:20:53.439170 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf Jan 28 01:20:53.435126 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:20:53.445052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:20:53.503338 systemd-resolved[261]: Positive Trust Anchors: Jan 28 01:20:53.503354 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:20:53.503385 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:20:53.505545 systemd-resolved[261]: Defaulting to hostname 'linux'. Jan 28 01:20:53.512156 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:20:53.519408 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:20:53.601976 kernel: SCSI subsystem initialized Jan 28 01:20:53.608967 kernel: Loading iSCSI transport class v2.0-870. Jan 28 01:20:53.618977 kernel: iscsi: registered transport (tcp) Jan 28 01:20:53.635881 kernel: iscsi: registered transport (qla4xxx) Jan 28 01:20:53.635922 kernel: QLogic iSCSI HBA Driver Jan 28 01:20:53.674245 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 01:20:53.688195 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 01:20:53.717824 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 01:20:53.717886 kernel: device-mapper: uevent: version 1.0.3 Jan 28 01:20:53.724000 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 28 01:20:53.772981 kernel: raid6: neonx8 gen() 15807 MB/s Jan 28 01:20:53.791970 kernel: raid6: neonx4 gen() 15694 MB/s Jan 28 01:20:53.810966 kernel: raid6: neonx2 gen() 13274 MB/s Jan 28 01:20:53.830968 kernel: raid6: neonx1 gen() 10486 MB/s Jan 28 01:20:53.849962 kernel: raid6: int64x8 gen() 6975 MB/s Jan 28 01:20:53.868966 kernel: raid6: int64x4 gen() 7354 MB/s Jan 28 01:20:53.888967 kernel: raid6: int64x2 gen() 6146 MB/s Jan 28 01:20:53.910760 kernel: raid6: int64x1 gen() 5072 MB/s Jan 28 01:20:53.910770 kernel: raid6: using algorithm neonx8 gen() 15807 MB/s Jan 28 01:20:53.933983 kernel: raid6: .... 
xor() 11887 MB/s, rmw enabled Jan 28 01:20:53.934002 kernel: raid6: using neon recovery algorithm Jan 28 01:20:53.943990 kernel: xor: measuring software checksum speed Jan 28 01:20:53.944002 kernel: 8regs : 19759 MB/sec Jan 28 01:20:53.947863 kernel: 32regs : 19669 MB/sec Jan 28 01:20:53.950762 kernel: arm64_neon : 27195 MB/sec Jan 28 01:20:53.954346 kernel: xor: using function: arm64_neon (27195 MB/sec) Jan 28 01:20:54.003979 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 01:20:54.014011 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:20:54.030145 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:20:54.050572 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jan 28 01:20:54.055123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:20:54.071684 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 01:20:54.090760 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Jan 28 01:20:54.119468 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:20:54.132477 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:20:54.170548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:20:54.184188 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 01:20:54.201606 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 01:20:54.208351 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:20:54.220040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:20:54.242174 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:20:54.266680 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 01:20:54.284490 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:20:54.294432 kernel: hv_vmbus: Vmbus version:5.3 Jan 28 01:20:54.311328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:20:54.350299 kernel: hv_vmbus: registering driver hid_hyperv Jan 28 01:20:54.350326 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 28 01:20:54.350337 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 28 01:20:54.350346 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 28 01:20:54.350517 kernel: hv_vmbus: registering driver hv_storvsc Jan 28 01:20:54.350529 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 28 01:20:54.317158 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:20:54.369530 kernel: scsi host1: storvsc_host_t Jan 28 01:20:54.369700 kernel: scsi host0: storvsc_host_t Jan 28 01:20:54.352785 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:20:54.398413 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 28 01:20:54.398469 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 28 01:20:54.398479 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 28 01:20:54.398489 kernel: hv_vmbus: registering driver hv_netvsc Jan 28 01:20:54.369262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:20:54.415975 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 28 01:20:54.369432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:20:54.392112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:20:54.421853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:20:54.444417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:20:54.462336 kernel: PTP clock support registered Jan 28 01:20:54.464448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:20:54.482739 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:20:54.508012 kernel: hv_utils: Registering HyperV Utility Driver Jan 28 01:20:54.508036 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 28 01:20:54.508215 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 01:20:54.508226 kernel: hv_vmbus: registering driver hv_utils Jan 28 01:20:54.508236 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: VF slot 1 added Jan 28 01:20:54.508330 kernel: hv_utils: Heartbeat IC version 3.0 Jan 28 01:20:54.482845 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:20:54.353951 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 28 01:20:54.369327 kernel: hv_utils: Shutdown IC version 3.2 Jan 28 01:20:54.369345 kernel: hv_utils: TimeSync IC version 4.0 Jan 28 01:20:54.369355 systemd-journald[217]: Time jumped backwards, rotating. Jan 28 01:20:54.512150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:20:54.387698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#40 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:20:54.389326 kernel: hv_vmbus: registering driver hv_pci Jan 28 01:20:54.512293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:20:54.407601 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 28 01:20:54.407780 kernel: hv_pci 0136b8db-c8cc-48d9-8a94-7f072ac2c5e7: PCI VMBus probing: Using version 0x10004 Jan 28 01:20:54.407978 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 28 01:20:54.351890 systemd-resolved[261]: Clock change detected. Flushing caches. Jan 28 01:20:54.418475 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 28 01:20:54.421312 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 28 01:20:54.421420 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 28 01:20:54.363245 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:20:54.444728 kernel: hv_pci 0136b8db-c8cc-48d9-8a94-7f072ac2c5e7: PCI host bridge to bus c8cc:00 Jan 28 01:20:54.444906 kernel: pci_bus c8cc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 28 01:20:54.379072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:20:54.464639 kernel: pci_bus c8cc:00: No busn resource found for root bus, will use [bus 00-ff] Jan 28 01:20:54.464808 kernel: pci c8cc:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 28 01:20:54.433572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 01:20:54.476813 kernel: pci c8cc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 28 01:20:54.476891 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 01:20:54.477029 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:20:54.503133 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 28 01:20:54.503312 kernel: pci c8cc:00:02.0: enabling Extended Tags Jan 28 01:20:54.503333 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:20:54.531497 kernel: pci c8cc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c8cc:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 28 01:20:54.555371 kernel: pci_bus c8cc:00: busn_res: [bus 00-ff] end is updated to 00 Jan 28 01:20:54.555567 kernel: pci c8cc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 28 01:20:54.557040 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:20:54.604271 kernel: mlx5_core c8cc:00:02.0: enabling device (0000 -> 0002) Jan 28 01:20:54.610846 kernel: mlx5_core c8cc:00:02.0: firmware version: 16.30.5026 Jan 28 01:20:54.806391 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: VF registering: eth1 Jan 28 01:20:54.806574 kernel: mlx5_core c8cc:00:02.0 eth1: joined to eth0 Jan 28 01:20:54.813044 kernel: mlx5_core c8cc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 28 01:20:54.823856 kernel: mlx5_core c8cc:00:02.0 enP51404s1: renamed from eth1 Jan 28 01:20:55.077031 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 28 01:20:55.097787 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 28 01:20:55.118856 kernel: BTRFS: device fsid 0fc26676-8036-4cd5-8c30-2943afb25b0b devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (489) Jan 28 01:20:55.132343 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 28 01:20:55.138180 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 28 01:20:55.163056 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 01:20:55.280929 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (482) Jan 28 01:20:55.293304 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 01:20:56.191931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 01:20:56.192638 disk-uuid[609]: The operation has completed successfully. Jan 28 01:20:56.252927 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 01:20:56.253012 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 01:20:56.285033 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 01:20:56.295461 sh[699]: Success Jan 28 01:20:56.322878 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 28 01:20:56.679297 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:20:56.687970 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 01:20:56.692381 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 28 01:20:56.732301 kernel: BTRFS info (device dm-0): first mount of filesystem 0fc26676-8036-4cd5-8c30-2943afb25b0b Jan 28 01:20:56.732348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 28 01:20:56.738288 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 28 01:20:56.742729 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 01:20:56.746303 kernel: BTRFS info (device dm-0): using free space tree Jan 28 01:20:57.076080 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 01:20:57.081386 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 01:20:57.099005 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 01:20:57.108029 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 01:20:57.135470 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:20:57.135524 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 01:20:57.139510 kernel: BTRFS info (device sda6): using free space tree Jan 28 01:20:57.175263 kernel: BTRFS info (device sda6): auto enabling async discard Jan 28 01:20:57.182278 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 28 01:20:57.193852 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:20:57.200567 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 01:20:57.215097 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 01:20:57.220851 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:20:57.237737 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:20:57.263985 systemd-networkd[883]: lo: Link UP Jan 28 01:20:57.263993 systemd-networkd[883]: lo: Gained carrier Jan 28 01:20:57.265651 systemd-networkd[883]: Enumeration completed Jan 28 01:20:57.268646 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:20:57.274322 systemd-networkd[883]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:20:57.274326 systemd-networkd[883]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:20:57.274978 systemd[1]: Reached target network.target - Network. Jan 28 01:20:57.356856 kernel: mlx5_core c8cc:00:02.0 enP51404s1: Link up Jan 28 01:20:57.400845 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: Data path switched to VF: enP51404s1 Jan 28 01:20:57.401396 systemd-networkd[883]: enP51404s1: Link UP Jan 28 01:20:57.401477 systemd-networkd[883]: eth0: Link UP Jan 28 01:20:57.401571 systemd-networkd[883]: eth0: Gained carrier Jan 28 01:20:57.401579 systemd-networkd[883]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 28 01:20:57.412182 systemd-networkd[883]: enP51404s1: Gained carrier
Jan 28 01:20:57.431871 systemd-networkd[883]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 28 01:20:58.133905 ignition[881]: Ignition 2.19.0
Jan 28 01:20:58.133916 ignition[881]: Stage: fetch-offline
Jan 28 01:20:58.133951 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:20:58.133958 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:20:58.134060 ignition[881]: parsed url from cmdline: ""
Jan 28 01:20:58.134063 ignition[881]: no config URL provided
Jan 28 01:20:58.134067 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:20:58.134075 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:20:58.134080 ignition[881]: failed to fetch config: resource requires networking
Jan 28 01:20:58.134290 ignition[881]: Ignition finished successfully
Jan 28 01:20:58.137200 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:20:58.161080 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 28 01:20:58.178863 ignition[892]: Ignition 2.19.0
Jan 28 01:20:58.178871 ignition[892]: Stage: fetch
Jan 28 01:20:58.179089 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:20:58.179098 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:20:58.179226 ignition[892]: parsed url from cmdline: ""
Jan 28 01:20:58.179229 ignition[892]: no config URL provided
Jan 28 01:20:58.179234 ignition[892]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:20:58.179241 ignition[892]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:20:58.179265 ignition[892]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 28 01:20:58.265574 ignition[892]: GET result: OK
Jan 28 01:20:58.265644 ignition[892]: config has been read from IMDS userdata
Jan 28 01:20:58.265686 ignition[892]: parsing config with SHA512: aa76b82bbf8e34d533e3e837ff9aad6b31635d45b0e0b09456041586a2d4b6eb6fa3710eb29d010ba1e1b26e80dc722c2508297d4fc80b6596342eef1f17bb61
Jan 28 01:20:58.269527 unknown[892]: fetched base config from "system"
Jan 28 01:20:58.269534 unknown[892]: fetched base config from "system"
Jan 28 01:20:58.269547 unknown[892]: fetched user config from "azure"
Jan 28 01:20:58.270088 ignition[892]: fetch: fetch complete
Jan 28 01:20:58.270093 ignition[892]: fetch: fetch passed
Jan 28 01:20:58.270146 ignition[892]: Ignition finished successfully
Jan 28 01:20:58.272005 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 28 01:20:58.289067 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:20:58.312897 ignition[898]: Ignition 2.19.0
Jan 28 01:20:58.312906 ignition[898]: Stage: kargs
Jan 28 01:20:58.313070 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:20:58.313079 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:20:58.313936 ignition[898]: kargs: kargs passed
Jan 28 01:20:58.313982 ignition[898]: Ignition finished successfully
Jan 28 01:20:58.319092 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:20:58.333127 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:20:58.358402 ignition[904]: Ignition 2.19.0
Jan 28 01:20:58.358412 ignition[904]: Stage: disks
Jan 28 01:20:58.358589 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:20:58.358599 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:20:58.359598 ignition[904]: disks: disks passed
Jan 28 01:20:58.359646 ignition[904]: Ignition finished successfully
Jan 28 01:20:58.362524 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:20:58.369205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:20:58.378229 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:20:58.387442 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:20:58.396784 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:20:58.406513 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:20:58.433000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:20:58.525352 systemd-fsck[913]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 28 01:20:58.532084 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:20:58.546091 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:20:58.601872 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:20:58.601904 kernel: EXT4-fs (sda9): mounted filesystem 2c7419f5-3bc3-4c5f-b132-f03585db88cd r/w with ordered data mode. Quota mode: none.
Jan 28 01:20:58.606083 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:20:58.651936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:20:58.671882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (924)
Jan 28 01:20:58.683000 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:20:58.683054 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:20:58.686564 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:20:58.693033 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:20:58.696513 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:20:58.701038 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 28 01:20:58.707536 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:20:58.707571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:20:58.720241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:20:58.733549 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:20:58.752078 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:20:58.867994 systemd-networkd[883]: eth0: Gained IPv6LL
Jan 28 01:20:59.266092 coreos-metadata[941]: Jan 28 01:20:59.266 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 28 01:20:59.275420 coreos-metadata[941]: Jan 28 01:20:59.275 INFO Fetch successful
Jan 28 01:20:59.279607 coreos-metadata[941]: Jan 28 01:20:59.275 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 28 01:20:59.290419 coreos-metadata[941]: Jan 28 01:20:59.290 INFO Fetch successful
Jan 28 01:20:59.337894 coreos-metadata[941]: Jan 28 01:20:59.337 INFO wrote hostname ci-4081.3.6-n-6d8ceced70 to /sysroot/etc/hostname
Jan 28 01:20:59.345952 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 28 01:20:59.674792 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 01:20:59.712946 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory
Jan 28 01:20:59.738446 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 01:20:59.758117 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 01:21:01.152739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:21:01.165297 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:21:01.174009 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:21:01.188127 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:21:01.191191 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:21:01.213731 ignition[1042]: INFO : Ignition 2.19.0
Jan 28 01:21:01.216172 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 01:21:01.219928 ignition[1042]: INFO : Stage: mount
Jan 28 01:21:01.219928 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:21:01.219928 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:21:01.219928 ignition[1042]: INFO : mount: mount passed
Jan 28 01:21:01.219928 ignition[1042]: INFO : Ignition finished successfully
Jan 28 01:21:01.223398 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 01:21:01.243998 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 01:21:01.270059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:21:01.289847 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1054)
Jan 28 01:21:01.301284 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:21:01.301300 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:21:01.306073 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:21:01.315844 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:21:01.317387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:21:01.338848 ignition[1072]: INFO : Ignition 2.19.0
Jan 28 01:21:01.338848 ignition[1072]: INFO : Stage: files
Jan 28 01:21:01.345340 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:21:01.345340 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:21:01.345340 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 01:21:01.345340 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 01:21:01.345340 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 01:21:01.445251 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 01:21:01.447268 unknown[1072]: wrote ssh authorized keys file for user: core
Jan 28 01:21:01.451698 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 01:21:01.451698 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 01:21:01.479759 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 28 01:21:01.488592 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 28 01:21:01.543156 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 28 01:21:01.771932 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 28 01:21:01.771932 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 28 01:21:01.789049 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 28 01:21:02.262737 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 28 01:21:02.617590 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 28 01:21:02.617590 ignition[1072]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 28 01:21:02.647774 ignition[1072]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:21:02.657100 ignition[1072]: INFO : files: files passed
Jan 28 01:21:02.657100 ignition[1072]: INFO : Ignition finished successfully
Jan 28 01:21:02.666187 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 01:21:02.708512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 01:21:02.717992 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 01:21:02.726383 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 01:21:02.730921 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:21:02.761878 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:21:02.761878 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:21:02.776921 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:21:02.777458 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:21:02.790094 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 01:21:02.812088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 01:21:02.839406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 01:21:02.841899 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 01:21:02.850487 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 01:21:02.860927 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 28 01:21:02.870362 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 01:21:02.873031 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 01:21:02.902495 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:21:02.916098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 01:21:02.935981 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 28 01:21:02.936095 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 28 01:21:02.946636 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:21:02.957530 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:21:02.968692 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 01:21:02.978328 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 01:21:02.978394 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:21:02.992280 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 28 01:21:03.002523 systemd[1]: Stopped target basic.target - Basic System.
Jan 28 01:21:03.011517 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 28 01:21:03.020733 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:21:03.031155 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 28 01:21:03.041769 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 28 01:21:03.051405 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:21:03.062644 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 28 01:21:03.073601 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 28 01:21:03.082948 systemd[1]: Stopped target swap.target - Swaps.
Jan 28 01:21:03.091438 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 28 01:21:03.091506 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:21:03.104946 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:21:03.114815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:21:03.125404 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 28 01:21:03.130577 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:21:03.136692 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 28 01:21:03.136749 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:21:03.152631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 28 01:21:03.152677 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:21:03.162702 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 28 01:21:03.162747 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 28 01:21:03.171993 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 28 01:21:03.172041 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 28 01:21:03.190068 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 28 01:21:03.201020 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 28 01:21:03.210964 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 28 01:21:03.211048 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:21:03.226756 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 28 01:21:03.226820 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:21:03.233870 ignition[1124]: INFO : Ignition 2.19.0
Jan 28 01:21:03.233870 ignition[1124]: INFO : Stage: umount
Jan 28 01:21:03.233870 ignition[1124]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:21:03.233870 ignition[1124]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:21:03.233870 ignition[1124]: INFO : umount: umount passed
Jan 28 01:21:03.233870 ignition[1124]: INFO : Ignition finished successfully
Jan 28 01:21:03.245700 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 28 01:21:03.246241 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 28 01:21:03.246340 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 28 01:21:03.257328 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 28 01:21:03.257452 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 28 01:21:03.263770 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 28 01:21:03.263826 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 28 01:21:03.275221 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 28 01:21:03.275281 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 28 01:21:03.279956 systemd[1]: Stopped target network.target - Network.
Jan 28 01:21:03.287361 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 28 01:21:03.287412 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:21:03.296768 systemd[1]: Stopped target paths.target - Path Units.
Jan 28 01:21:03.305992 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 28 01:21:03.310092 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:21:03.316299 systemd[1]: Stopped target slices.target - Slice Units.
Jan 28 01:21:03.325043 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 28 01:21:03.334436 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 28 01:21:03.334493 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:21:03.343389 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 28 01:21:03.343423 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:21:03.352793 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 28 01:21:03.352848 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 28 01:21:03.362660 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 28 01:21:03.362714 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 28 01:21:03.372228 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 28 01:21:03.380769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 28 01:21:03.389289 systemd-networkd[883]: eth0: DHCPv6 lease lost
Jan 28 01:21:03.391359 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 28 01:21:03.391529 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 28 01:21:03.409501 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 28 01:21:03.409688 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 28 01:21:03.420205 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 28 01:21:03.420257 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:21:03.446064 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 28 01:21:03.454657 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 28 01:21:03.454727 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:21:03.467882 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 01:21:03.467933 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:21:03.476328 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 28 01:21:03.476368 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:21:03.485222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 28 01:21:03.485261 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:21:03.494706 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:21:03.534101 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 28 01:21:03.534268 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:21:03.544717 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 28 01:21:03.544762 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:21:03.553978 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 28 01:21:03.554008 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:21:03.571385 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 28 01:21:03.571443 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:21:03.583555 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: Data path switched from VF: enP51404s1
Jan 28 01:21:03.583628 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 28 01:21:03.583689 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:21:03.593098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:21:03.593156 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:21:03.633186 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 28 01:21:03.643896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 28 01:21:03.643971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:21:03.655547 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 28 01:21:03.655608 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:21:03.665716 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 01:21:03.665769 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:21:03.676006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:21:03.676048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:21:03.686021 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 28 01:21:03.686142 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 28 01:21:03.697040 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 28 01:21:03.698856 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 28 01:21:03.860064 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 28 01:21:03.860167 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 28 01:21:03.873078 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 28 01:21:03.877750 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 28 01:21:03.877806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 28 01:21:03.897102 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 28 01:21:03.926818 systemd[1]: Switching root.
Jan 28 01:21:03.992493 systemd-journald[217]: Journal stopped
Jan 28 01:21:09.197147 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 28 01:21:09.197187 kernel: SELinux: policy capability network_peer_controls=1
Jan 28 01:21:09.197198 kernel: SELinux: policy capability open_perms=1
Jan 28 01:21:09.197211 kernel: SELinux: policy capability extended_socket_class=1
Jan 28 01:21:09.197219 kernel: SELinux: policy capability always_check_network=0
Jan 28 01:21:09.197228 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 28 01:21:09.197237 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 28 01:21:09.197246 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 28 01:21:09.197254 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 28 01:21:09.197263 systemd[1]: Successfully loaded SELinux policy in 200.992ms.
Jan 28 01:21:09.197275 kernel: audit: type=1403 audit(1769563265.270:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 28 01:21:09.197284 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.958ms.
Jan 28 01:21:09.197295 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 01:21:09.197305 systemd[1]: Detected virtualization microsoft.
Jan 28 01:21:09.197315 systemd[1]: Detected architecture arm64.
Jan 28 01:21:09.197326 systemd[1]: Detected first boot.
Jan 28 01:21:09.197335 systemd[1]: Hostname set to <ci-4081.3.6-n-6d8ceced70>.
Jan 28 01:21:09.197345 systemd[1]: Initializing machine ID from random generator.
Jan 28 01:21:09.197355 zram_generator::config[1165]: No configuration found.
Jan 28 01:21:09.197365 systemd[1]: Populated /etc with preset unit settings.
Jan 28 01:21:09.197374 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 28 01:21:09.197385 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 28 01:21:09.197396 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 28 01:21:09.197407 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 28 01:21:09.197417 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 28 01:21:09.197427 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 28 01:21:09.197437 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 28 01:21:09.197447 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 28 01:21:09.197459 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 28 01:21:09.197469 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 28 01:21:09.197479 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 28 01:21:09.197489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:21:09.197499 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:21:09.197509 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 28 01:21:09.197519 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 28 01:21:09.197529 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 28 01:21:09.197539 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:21:09.197551 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 28 01:21:09.197561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:21:09.197571 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 28 01:21:09.197583 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 28 01:21:09.197593 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:21:09.197603 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 28 01:21:09.197614 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:21:09.197626 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:21:09.197636 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:21:09.197646 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:21:09.197656 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 28 01:21:09.197666 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 28 01:21:09.197676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:21:09.197686 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:21:09.197698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:21:09.197712 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 28 01:21:09.197723 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 28 01:21:09.197733 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 28 01:21:09.197743 systemd[1]: Mounting media.mount - External Media Directory...
Jan 28 01:21:09.197753 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 28 01:21:09.197765 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 28 01:21:09.197775 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 28 01:21:09.197786 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 28 01:21:09.197797 systemd[1]: Reached target machines.target - Containers.
Jan 28 01:21:09.197807 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 28 01:21:09.197817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:21:09.197828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:21:09.197855 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 28 01:21:09.197869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:21:09.197880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 01:21:09.197890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:21:09.197900 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 28 01:21:09.197910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:21:09.197920 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 28 01:21:09.197931 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 28 01:21:09.197941 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 28 01:21:09.197951 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 28 01:21:09.197962 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 28 01:21:09.197972 kernel: fuse: init (API version 7.39)
Jan 28 01:21:09.197982 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:21:09.197991 kernel: ACPI: bus type drm_connector registered
Jan 28 01:21:09.198001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:21:09.198011 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 01:21:09.198020 kernel: loop: module loaded
Jan 28 01:21:09.198030 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 28 01:21:09.198040 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:21:09.198079 systemd-journald[1258]: Collecting audit messages is disabled.
Jan 28 01:21:09.198101 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 28 01:21:09.198112 systemd-journald[1258]: Journal started
Jan 28 01:21:09.198135 systemd-journald[1258]: Runtime Journal (/run/log/journal/99a3a42947714c09a6c9deea3b2baada) is 8.0M, max 78.5M, 70.5M free.
Jan 28 01:21:08.289897 systemd[1]: Queued start job for default target multi-user.target.
Jan 28 01:21:08.438087 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 28 01:21:08.438455 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 28 01:21:08.438762 systemd[1]: systemd-journald.service: Consumed 2.698s CPU time.
Jan 28 01:21:09.205207 systemd[1]: Stopped verity-setup.service.
Jan 28 01:21:09.220244 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:21:09.223505 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 28 01:21:09.228433 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 28 01:21:09.233876 systemd[1]: Mounted media.mount - External Media Directory.
Jan 28 01:21:09.238547 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 28 01:21:09.243748 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 28 01:21:09.249063 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 28 01:21:09.254870 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 28 01:21:09.260722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:21:09.267097 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 28 01:21:09.267221 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 28 01:21:09.273651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:21:09.273780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:21:09.279601 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 01:21:09.279725 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 01:21:09.285336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:21:09.285464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:21:09.291349 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 28 01:21:09.291470 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 28 01:21:09.296887 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:21:09.297004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:21:09.302619 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:21:09.308081 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:21:09.314203 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 28 01:21:09.319987 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:21:09.333931 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 01:21:09.350967 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 28 01:21:09.356826 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 28 01:21:09.362067 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 28 01:21:09.362103 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:21:09.367456 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 28 01:21:09.375150 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 28 01:21:09.381609 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 28 01:21:09.386444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:21:09.388220 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 28 01:21:09.394779 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 28 01:21:09.400057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 01:21:09.401385 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 28 01:21:09.406459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 01:21:09.408016 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:21:09.416055 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 28 01:21:09.422927 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:21:09.444031 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 28 01:21:09.453726 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 28 01:21:09.462583 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 28 01:21:09.469496 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 28 01:21:09.475367 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 28 01:21:09.485116 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 28 01:21:09.486852 kernel: loop0: detected capacity change from 0 to 200800
Jan 28 01:21:09.494982 systemd-journald[1258]: Time spent on flushing to /var/log/journal/99a3a42947714c09a6c9deea3b2baada is 56.485ms for 904 entries.
Jan 28 01:21:09.494982 systemd-journald[1258]: System Journal (/var/log/journal/99a3a42947714c09a6c9deea3b2baada) is 11.8M, max 2.6G, 2.6G free.
Jan 28 01:21:09.503191 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 28 01:21:09.510215 udevadm[1302]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 28 01:21:09.553146 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 28 01:21:09.556342 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 28 01:21:09.566945 systemd-journald[1258]: Received client request to flush runtime journal.
Jan 28 01:21:09.566987 systemd-journald[1258]: /var/log/journal/99a3a42947714c09a6c9deea3b2baada/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 28 01:21:09.567008 systemd-journald[1258]: Rotating system journal.
Jan 28 01:21:09.569237 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 28 01:21:09.575588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:21:09.606854 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 28 01:21:09.644208 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jan 28 01:21:09.644223 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jan 28 01:21:09.648371 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:21:09.658982 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 28 01:21:09.671851 kernel: loop1: detected capacity change from 0 to 114432
Jan 28 01:21:09.755248 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 28 01:21:09.765025 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:21:09.782041 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Jan 28 01:21:09.782346 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Jan 28 01:21:09.786322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:21:10.111868 kernel: loop2: detected capacity change from 0 to 31320
Jan 28 01:21:10.219265 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 28 01:21:10.232009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:21:10.250441 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Jan 28 01:21:10.397914 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:21:10.415973 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:21:10.458013 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 28 01:21:10.520075 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 28 01:21:10.528211 kernel: loop3: detected capacity change from 0 to 114328
Jan 28 01:21:10.538407 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 28 01:21:10.553907 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 01:21:10.583889 kernel: mousedev: PS/2 mouse device common for all mice
Jan 28 01:21:10.612079 kernel: hv_vmbus: registering driver hv_balloon
Jan 28 01:21:10.612136 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 28 01:21:10.619662 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 28 01:21:10.652220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:21:10.660242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:21:10.660561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:21:10.675236 systemd-networkd[1339]: lo: Link UP
Jan 28 01:21:10.675245 systemd-networkd[1339]: lo: Gained carrier
Jan 28 01:21:10.684841 systemd-networkd[1339]: Enumeration completed
Jan 28 01:21:10.687319 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1337)
Jan 28 01:21:10.687377 kernel: hv_vmbus: registering driver hyperv_fb
Jan 28 01:21:10.692284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:21:10.692474 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:21:10.692477 systemd-networkd[1339]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:21:10.703053 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:21:10.703071 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 28 01:21:10.703126 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 28 01:21:10.709724 kernel: Console: switching to colour dummy device 80x25
Jan 28 01:21:10.717878 kernel: Console: switching to colour frame buffer device 128x48
Jan 28 01:21:10.729041 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 28 01:21:10.779073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:21:10.779245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:21:10.784910 kernel: mlx5_core c8cc:00:02.0 enP51404s1: Link up
Jan 28 01:21:10.787144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 28 01:21:10.799979 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 28 01:21:10.811896 kernel: hv_netvsc 002248bb-4041-0022-48bb-4041002248bb eth0: Data path switched to VF: enP51404s1
Jan 28 01:21:10.813018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:21:10.813287 systemd-networkd[1339]: enP51404s1: Link UP
Jan 28 01:21:10.813379 systemd-networkd[1339]: eth0: Link UP
Jan 28 01:21:10.813382 systemd-networkd[1339]: eth0: Gained carrier
Jan 28 01:21:10.813411 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:21:10.821124 systemd-networkd[1339]: enP51404s1: Gained carrier
Jan 28 01:21:10.837978 systemd-networkd[1339]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 28 01:21:10.871161 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 28 01:21:10.938245 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 28 01:21:10.951937 kernel: loop4: detected capacity change from 0 to 200800
Jan 28 01:21:10.952180 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 28 01:21:10.973855 kernel: loop5: detected capacity change from 0 to 114432
Jan 28 01:21:10.985856 kernel: loop6: detected capacity change from 0 to 31320
Jan 28 01:21:11.005857 kernel: loop7: detected capacity change from 0 to 114328
Jan 28 01:21:11.029889 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 28 01:21:11.051323 (sd-merge)[1429]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 28 01:21:11.051736 (sd-merge)[1429]: Merged extensions into '/usr'.
Jan 28 01:21:11.054470 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 28 01:21:11.061894 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:21:11.077047 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 28 01:21:11.080951 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 28 01:21:11.082993 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 28 01:21:11.083006 systemd[1]: Reloading...
Jan 28 01:21:11.152872 zram_generator::config[1465]: No configuration found.
Jan 28 01:21:11.274653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 28 01:21:11.348690 systemd[1]: Reloading finished in 265 ms.
Jan 28 01:21:11.376611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:21:11.383972 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 28 01:21:11.390340 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 28 01:21:11.403970 systemd[1]: Starting ensure-sysext.service...
Jan 28 01:21:11.411649 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:21:11.421823 systemd[1]: Reloading requested from client PID 1520 ('systemctl') (unit ensure-sysext.service)...
Jan 28 01:21:11.421858 systemd[1]: Reloading...
Jan 28 01:21:11.453997 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 28 01:21:11.454257 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 28 01:21:11.454906 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 28 01:21:11.455125 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
Jan 28 01:21:11.455173 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
Jan 28 01:21:11.473271 systemd-tmpfiles[1521]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 01:21:11.473284 systemd-tmpfiles[1521]: Skipping /boot
Jan 28 01:21:11.481233 systemd-tmpfiles[1521]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 01:21:11.481247 systemd-tmpfiles[1521]: Skipping /boot
Jan 28 01:21:11.513864 zram_generator::config[1552]: No configuration found.
Jan 28 01:21:11.615971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 28 01:21:11.690554 systemd[1]: Reloading finished in 268 ms.
Jan 28 01:21:11.709220 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:21:11.730072 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 28 01:21:11.743778 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 28 01:21:11.749242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:21:11.753095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:21:11.763088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:21:11.771083 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:21:11.778043 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:21:11.780075 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 28 01:21:11.787253 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:21:11.801079 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 28 01:21:11.808365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:21:11.808506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:21:11.814499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:21:11.814625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:21:11.820728 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:21:11.820949 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:21:11.830869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:21:11.840165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:21:11.849102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:21:11.867059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:21:11.873188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:21:11.874074 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 28 01:21:11.880157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:21:11.880298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:21:11.886060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:21:11.886189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:21:11.892278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:21:11.892403 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:21:11.904008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:21:11.912249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:21:11.918179 systemd-resolved[1621]: Positive Trust Anchors:
Jan 28 01:21:11.918192 systemd-resolved[1621]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:21:11.918224 systemd-resolved[1621]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:21:11.921127 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 01:21:11.930120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:21:11.939114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:21:11.944308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:21:11.944567 systemd[1]: Reached target time-set.target - System Time Set.
Jan 28 01:21:11.950070 augenrules[1644]: No rules
Jan 28 01:21:11.957364 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 28 01:21:11.963692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:21:11.963859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:21:11.969460 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 01:21:11.969752 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 01:21:11.971531 systemd-resolved[1621]: Using system hostname 'ci-4081.3.6-n-6d8ceced70'.
Jan 28 01:21:11.975255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:21:11.980924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:21:11.981083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:21:11.987787 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:21:11.988885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:21:11.998496 systemd[1]: Finished ensure-sysext.service.
Jan 28 01:21:12.004371 systemd[1]: Reached target network.target - Network.
Jan 28 01:21:12.008696 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:21:12.014573 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 01:21:12.014647 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 01:21:12.015031 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 28 01:21:12.393961 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 28 01:21:12.400161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 28 01:21:12.500005 systemd-networkd[1339]: eth0: Gained IPv6LL
Jan 28 01:21:12.502359 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 28 01:21:12.510288 systemd[1]: Reached target network-online.target - Network is Online.
Jan 28 01:21:15.569084 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 28 01:21:15.578049 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 28 01:21:15.586974 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 28 01:21:15.599344 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 28 01:21:15.604846 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:21:15.609646 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 28 01:21:15.615263 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 28 01:21:15.621066 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 28 01:21:15.625991 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 28 01:21:15.631927 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 28 01:21:15.637754 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 28 01:21:15.637783 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:21:15.641912 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:21:15.654418 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:21:15.661011 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:21:15.672414 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:21:15.677554 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:21:15.682409 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:21:15.686751 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:21:15.690848 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:21:15.690873 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:21:15.701945 systemd[1]: Starting chronyd.service - NTP client/server... Jan 28 01:21:15.708974 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:21:15.724775 (chronyd)[1665]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 28 01:21:15.734991 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 01:21:15.741707 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:21:15.749018 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:21:15.751045 chronyd[1673]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 28 01:21:15.754613 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:21:15.758944 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:21:15.758984 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 28 01:21:15.761041 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 28 01:21:15.767905 KVP[1675]: KVP starting; pid is:1675 Jan 28 01:21:15.770536 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 28 01:21:15.774758 chronyd[1673]: Timezone right/UTC failed leap second check, ignoring Jan 28 01:21:15.774950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:15.779475 chronyd[1673]: Loaded seccomp filter (level 2) Jan 28 01:21:15.782218 jq[1671]: false Jan 28 01:21:15.783769 KVP[1675]: KVP LIC Version: 3.1 Jan 28 01:21:15.784240 kernel: hv_utils: KVP IC version 4.0 Jan 28 01:21:15.786278 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:21:15.793033 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:21:15.801020 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:21:15.807021 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:21:15.815713 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 28 01:21:15.818200 extend-filesystems[1674]: Found loop4
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found loop5
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found loop6
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found loop7
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found sda
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found sda1
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found sda2
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found sda3
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found usr
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found sda4
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found sda6
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found sda7
Jan 28 01:21:15.828262 extend-filesystems[1674]: Found sda9
Jan 28 01:21:15.828262 extend-filesystems[1674]: Checking size of /dev/sda9
Jan 28 01:21:16.028423 coreos-metadata[1667]: Jan 28 01:21:15.977 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 28 01:21:16.028423 coreos-metadata[1667]: Jan 28 01:21:15.980 INFO Fetch successful
Jan 28 01:21:16.028423 coreos-metadata[1667]: Jan 28 01:21:15.980 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 28 01:21:16.028423 coreos-metadata[1667]: Jan 28 01:21:16.022 INFO Fetch successful
Jan 28 01:21:16.028423 coreos-metadata[1667]: Jan 28 01:21:16.023 INFO Fetching http://168.63.129.16/machine/ad669c92-fc28-4e49-b055-487e72e11ac7/c6f65f99%2Ddbf5%2D4c6c%2D9d4d%2Df32a876469cf.%5Fci%2D4081.3.6%2Dn%2D6d8ceced70?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 28 01:21:16.028423 coreos-metadata[1667]: Jan 28 01:21:16.025 INFO Fetch successful
Jan 28 01:21:16.028423 coreos-metadata[1667]: Jan 28 01:21:16.025 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 28 01:21:15.858615 dbus-daemon[1668]: [system] SELinux support is enabled
Jan 28 01:21:16.044393 extend-filesystems[1674]: Old size kept for /dev/sda9
Jan 28 01:21:16.044393 extend-filesystems[1674]: Found sr0
Jan 28 01:21:15.833029 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 28 01:21:16.065432 dbus-daemon[1668]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 28 01:21:16.074412 coreos-metadata[1667]: Jan 28 01:21:16.057 INFO Fetch successful
Jan 28 01:21:15.839843 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 28 01:21:15.840317 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 28 01:21:15.843016 systemd[1]: Starting update-engine.service - Update Engine...
Jan 28 01:21:16.074721 update_engine[1691]: I20260128 01:21:15.967108 1691 main.cc:92] Flatcar Update Engine starting
Jan 28 01:21:16.074721 update_engine[1691]: I20260128 01:21:15.979534 1691 update_check_scheduler.cc:74] Next update check in 6m50s
Jan 28 01:21:15.871375 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 28 01:21:16.075007 jq[1697]: true
Jan 28 01:21:15.888579 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 28 01:21:15.896319 systemd[1]: Started chronyd.service - NTP client/server.
Jan 28 01:21:15.908421 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 28 01:21:15.908611 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:21:16.075423 jq[1725]: true Jan 28 01:21:15.908883 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:21:15.909109 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:21:15.937212 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:21:15.937380 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:21:15.975429 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:21:15.981707 systemd-logind[1690]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 28 01:21:15.983410 systemd-logind[1690]: New seat seat0. Jan 28 01:21:15.996232 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:21:16.031697 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:21:16.031904 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:21:16.063219 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:21:16.063258 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:21:16.070139 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 01:21:16.082480 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:21:16.082513 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:21:16.096805 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:21:16.103923 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1704) Jan 28 01:21:16.112130 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:21:16.136544 tar[1721]: linux-arm64/LICENSE Jan 28 01:21:16.136822 tar[1721]: linux-arm64/helm Jan 28 01:21:16.161758 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 01:21:16.171577 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:21:16.313583 bash[1786]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:21:16.316132 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:21:16.323141 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:21:16.566126 locksmithd[1750]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:21:16.754959 tar[1721]: linux-arm64/README.md Jan 28 01:21:16.768921 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:21:16.937045 sshd_keygen[1696]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:21:16.960894 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:21:16.968999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:21:16.977290 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:21:16.979136 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:21:16.986935 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 28 01:21:16.997472 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:21:16.997654 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:21:17.008903 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:21:17.020566 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:21:17.033189 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:21:17.040173 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 28 01:21:17.048405 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:21:17.055074 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 28 01:21:17.107290 containerd[1726]: time="2026-01-28T01:21:17.106489940Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 01:21:17.144656 containerd[1726]: time="2026-01-28T01:21:17.144604620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146042 containerd[1726]: time="2026-01-28T01:21:17.145997100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146042 containerd[1726]: time="2026-01-28T01:21:17.146038140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 01:21:17.146112 containerd[1726]: time="2026-01-28T01:21:17.146057660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146319140Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146345460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146411860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146427020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146584020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146598740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146611620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146621980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 01:21:17.146864 containerd[1726]: time="2026-01-28T01:21:17.146684900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:21:17.147059 containerd[1726]: time="2026-01-28T01:21:17.146870100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:21:17.147059 containerd[1726]: time="2026-01-28T01:21:17.146980740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:21:17.147059 containerd[1726]: time="2026-01-28T01:21:17.146994980Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 01:21:17.147129 containerd[1726]: time="2026-01-28T01:21:17.147069500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 01:21:17.147129 containerd[1726]: time="2026-01-28T01:21:17.147108220Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:21:17.168901 containerd[1726]: time="2026-01-28T01:21:17.168859660Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 01:21:17.169000 containerd[1726]: time="2026-01-28T01:21:17.168918300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 01:21:17.169000 containerd[1726]: time="2026-01-28T01:21:17.168934940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 01:21:17.169000 containerd[1726]: time="2026-01-28T01:21:17.168951420Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 01:21:17.169000 containerd[1726]: time="2026-01-28T01:21:17.168975100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 01:21:17.169145 containerd[1726]: time="2026-01-28T01:21:17.169127500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 01:21:17.169367 containerd[1726]: time="2026-01-28T01:21:17.169351860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 01:21:17.169489 containerd[1726]: time="2026-01-28T01:21:17.169449340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 01:21:17.169489 containerd[1726]: time="2026-01-28T01:21:17.169468660Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 01:21:17.169489 containerd[1726]: time="2026-01-28T01:21:17.169481260Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 28 01:21:17.169552 containerd[1726]: time="2026-01-28T01:21:17.169495380Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 01:21:17.169552 containerd[1726]: time="2026-01-28T01:21:17.169507900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 01:21:17.169552 containerd[1726]: time="2026-01-28T01:21:17.169520300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 01:21:17.169552 containerd[1726]: time="2026-01-28T01:21:17.169533140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 01:21:17.169552 containerd[1726]: time="2026-01-28T01:21:17.169547620Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 01:21:17.169645 containerd[1726]: time="2026-01-28T01:21:17.169560380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 01:21:17.169645 containerd[1726]: time="2026-01-28T01:21:17.169572180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 01:21:17.169645 containerd[1726]: time="2026-01-28T01:21:17.169584180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 01:21:17.169645 containerd[1726]: time="2026-01-28T01:21:17.169604660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.169645 containerd[1726]: time="2026-01-28T01:21:17.169619020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.169645 containerd[1726]: time="2026-01-28T01:21:17.169636620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169654180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169666340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169681300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169692980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169706660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169719460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169733620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169745140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169758580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169770740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169786540Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169810260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169822100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170633 containerd[1726]: time="2026-01-28T01:21:17.169846020Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 01:21:17.170959 containerd[1726]: time="2026-01-28T01:21:17.169896140Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 01:21:17.170959 containerd[1726]: time="2026-01-28T01:21:17.169913020Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 01:21:17.170959 containerd[1726]: time="2026-01-28T01:21:17.169923500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 01:21:17.170959 containerd[1726]: time="2026-01-28T01:21:17.169935580Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 01:21:17.170959 containerd[1726]: time="2026-01-28T01:21:17.169944700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 01:21:17.170959 containerd[1726]: time="2026-01-28T01:21:17.169957620Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 01:21:17.170959 containerd[1726]: time="2026-01-28T01:21:17.169966940Z" level=info msg="NRI interface is disabled by configuration." Jan 28 01:21:17.170959 containerd[1726]: time="2026-01-28T01:21:17.169976580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 28 01:21:17.171109 containerd[1726]: time="2026-01-28T01:21:17.170245820Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 01:21:17.171109 containerd[1726]: time="2026-01-28T01:21:17.170311020Z" level=info msg="Connect containerd service" Jan 28 01:21:17.171109 containerd[1726]: time="2026-01-28T01:21:17.170346180Z" level=info msg="using legacy CRI server" Jan 28 01:21:17.171109 containerd[1726]: time="2026-01-28T01:21:17.170353340Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:21:17.171109 containerd[1726]: time="2026-01-28T01:21:17.170436220Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 01:21:17.171278 containerd[1726]: time="2026-01-28T01:21:17.171189100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:21:17.171849 
containerd[1726]: time="2026-01-28T01:21:17.171362260Z" level=info msg="Start subscribing containerd event" Jan 28 01:21:17.172289 containerd[1726]: time="2026-01-28T01:21:17.172167460Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:21:17.172289 containerd[1726]: time="2026-01-28T01:21:17.172208140Z" level=info msg="Start recovering state" Jan 28 01:21:17.172289 containerd[1726]: time="2026-01-28T01:21:17.172233660Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:21:17.172406 containerd[1726]: time="2026-01-28T01:21:17.172320500Z" level=info msg="Start event monitor" Jan 28 01:21:17.172406 containerd[1726]: time="2026-01-28T01:21:17.172333140Z" level=info msg="Start snapshots syncer" Jan 28 01:21:17.172406 containerd[1726]: time="2026-01-28T01:21:17.172347700Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:21:17.172406 containerd[1726]: time="2026-01-28T01:21:17.172357460Z" level=info msg="Start streaming server" Jan 28 01:21:17.172789 containerd[1726]: time="2026-01-28T01:21:17.172695900Z" level=info msg="containerd successfully booted in 0.066947s" Jan 28 01:21:17.172797 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:21:17.179854 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:21:17.188888 systemd[1]: Startup finished in 629ms (kernel) + 12.517s (initrd) + 12.117s (userspace) = 25.264s. Jan 28 01:21:17.443327 kubelet[1812]: E0128 01:21:17.443240 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:21:17.445933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:21:17.446070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:21:17.618786 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:17.620236 login[1826]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:17.629086 systemd-logind[1690]: New session 2 of user core. Jan 28 01:21:17.629602 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:21:17.637050 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:21:17.639742 systemd-logind[1690]: New session 1 of user core. Jan 28 01:21:17.660250 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:21:17.666082 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:21:17.669810 (systemd)[1846]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 01:21:17.797316 systemd[1846]: Queued start job for default target default.target. Jan 28 01:21:17.803877 systemd[1846]: Created slice app.slice - User Application Slice. Jan 28 01:21:17.803903 systemd[1846]: Reached target paths.target - Paths. Jan 28 01:21:17.803916 systemd[1846]: Reached target timers.target - Timers. Jan 28 01:21:17.805030 systemd[1846]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:21:17.814764 systemd[1846]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:21:17.814819 systemd[1846]: Reached target sockets.target - Sockets. 
Jan 28 01:21:17.814831 systemd[1846]: Reached target basic.target - Basic System. Jan 28 01:21:17.814899 systemd[1846]: Reached target default.target - Main User Target. Jan 28 01:21:17.814925 systemd[1846]: Startup finished in 139ms. Jan 28 01:21:17.815195 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:21:17.825986 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:21:17.827407 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 01:21:19.116152 waagent[1828]: 2026-01-28T01:21:19.116062Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 28 01:21:19.120595 waagent[1828]: 2026-01-28T01:21:19.120542Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 28 01:21:19.124452 waagent[1828]: 2026-01-28T01:21:19.124410Z INFO Daemon Daemon Python: 3.11.9 Jan 28 01:21:19.130850 waagent[1828]: 2026-01-28T01:21:19.129895Z INFO Daemon Daemon Run daemon Jan 28 01:21:19.133381 waagent[1828]: 2026-01-28T01:21:19.133291Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 28 01:21:19.140657 waagent[1828]: 2026-01-28T01:21:19.140610Z INFO Daemon Daemon Using waagent for provisioning Jan 28 01:21:19.145278 waagent[1828]: 2026-01-28T01:21:19.145238Z INFO Daemon Daemon Activate resource disk Jan 28 01:21:19.148924 waagent[1828]: 2026-01-28T01:21:19.148887Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 28 01:21:19.158481 waagent[1828]: 2026-01-28T01:21:19.158434Z INFO Daemon Daemon Found device: None Jan 28 01:21:19.162252 waagent[1828]: 2026-01-28T01:21:19.162214Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 28 01:21:19.169002 waagent[1828]: 2026-01-28T01:21:19.168970Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 28 01:21:19.180112 waagent[1828]: 2026-01-28T01:21:19.180059Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 01:21:19.185383 waagent[1828]: 2026-01-28T01:21:19.185337Z INFO Daemon Daemon Running default provisioning handler Jan 28 01:21:19.196484 waagent[1828]: 2026-01-28T01:21:19.195866Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 28 01:21:19.207154 waagent[1828]: 2026-01-28T01:21:19.207103Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 28 01:21:19.215073 waagent[1828]: 2026-01-28T01:21:19.215032Z INFO Daemon Daemon cloud-init is enabled: False Jan 28 01:21:19.219114 waagent[1828]: 2026-01-28T01:21:19.219079Z INFO Daemon Daemon Copying ovf-env.xml Jan 28 01:21:19.331353 waagent[1828]: 2026-01-28T01:21:19.330731Z INFO Daemon Daemon Successfully mounted dvd Jan 28 01:21:19.359915 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 28 01:21:19.361956 waagent[1828]: 2026-01-28T01:21:19.361883Z INFO Daemon Daemon Detect protocol endpoint Jan 28 01:21:19.366147 waagent[1828]: 2026-01-28T01:21:19.366101Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 01:21:19.370907 waagent[1828]: 2026-01-28T01:21:19.370829Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 28 01:21:19.376292 waagent[1828]: 2026-01-28T01:21:19.376256Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 28 01:21:19.381027 waagent[1828]: 2026-01-28T01:21:19.380990Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 28 01:21:19.385295 waagent[1828]: 2026-01-28T01:21:19.385261Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 28 01:21:19.428651 waagent[1828]: 2026-01-28T01:21:19.428610Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 28 01:21:19.433925 waagent[1828]: 2026-01-28T01:21:19.433902Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 28 01:21:19.438101 waagent[1828]: 2026-01-28T01:21:19.438069Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 28 01:21:19.555636 waagent[1828]: 2026-01-28T01:21:19.555537Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 28 01:21:19.561157 waagent[1828]: 2026-01-28T01:21:19.561102Z INFO Daemon Daemon Forcing an update of the goal state. Jan 28 01:21:19.569140 waagent[1828]: 2026-01-28T01:21:19.569096Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 01:21:19.586865 waagent[1828]: 2026-01-28T01:21:19.586813Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 28 01:21:19.591681 waagent[1828]: 2026-01-28T01:21:19.591641Z INFO Daemon Jan 28 01:21:19.593905 waagent[1828]: 2026-01-28T01:21:19.593870Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c4e52c5a-53c6-4d36-ace7-897750853036 eTag: 14043087609817076414 source: Fabric] Jan 28 01:21:19.603256 waagent[1828]: 2026-01-28T01:21:19.603218Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 28 01:21:19.608929 waagent[1828]: 2026-01-28T01:21:19.608889Z INFO Daemon Jan 28 01:21:19.611190 waagent[1828]: 2026-01-28T01:21:19.611156Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 28 01:21:19.621766 waagent[1828]: 2026-01-28T01:21:19.621705Z INFO Daemon Daemon Downloading artifacts profile blob Jan 28 01:21:19.694891 waagent[1828]: 2026-01-28T01:21:19.694287Z INFO Daemon Downloaded certificate {'thumbprint': 'A3918207A3866F88E02679A078E636ADB275E3C7', 'hasPrivateKey': True} Jan 28 01:21:19.702537 waagent[1828]: 2026-01-28T01:21:19.702491Z INFO Daemon Fetch goal state completed Jan 28 01:21:19.745032 waagent[1828]: 2026-01-28T01:21:19.744989Z INFO Daemon Daemon Starting provisioning Jan 28 01:21:19.749017 waagent[1828]: 2026-01-28T01:21:19.748963Z INFO Daemon Daemon Handle ovf-env.xml. Jan 28 01:21:19.752720 waagent[1828]: 2026-01-28T01:21:19.752679Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-6d8ceced70] Jan 28 01:21:19.759388 waagent[1828]: 2026-01-28T01:21:19.759337Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-6d8ceced70] Jan 28 01:21:19.764710 waagent[1828]: 2026-01-28T01:21:19.764665Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 28 01:21:19.770223 waagent[1828]: 2026-01-28T01:21:19.770163Z INFO Daemon Daemon Primary interface is [eth0] Jan 28 01:21:19.796782 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:21:19.796789 systemd-networkd[1339]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 28 01:21:19.796815 systemd-networkd[1339]: eth0: DHCP lease lost Jan 28 01:21:19.802210 waagent[1828]: 2026-01-28T01:21:19.797854Z INFO Daemon Daemon Create user account if not exists Jan 28 01:21:19.802556 waagent[1828]: 2026-01-28T01:21:19.802507Z INFO Daemon Daemon User core already exists, skip useradd Jan 28 01:21:19.807131 waagent[1828]: 2026-01-28T01:21:19.807089Z INFO Daemon Daemon Configure sudoer Jan 28 01:21:19.810814 waagent[1828]: 2026-01-28T01:21:19.810758Z INFO Daemon Daemon Configure sshd Jan 28 01:21:19.811913 systemd-networkd[1339]: eth0: DHCPv6 lease lost Jan 28 01:21:19.814571 waagent[1828]: 2026-01-28T01:21:19.814498Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 28 01:21:19.825210 waagent[1828]: 2026-01-28T01:21:19.825158Z INFO Daemon Daemon Deploy ssh public key. Jan 28 01:21:19.842872 systemd-networkd[1339]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 01:21:20.916803 waagent[1828]: 2026-01-28T01:21:20.916743Z INFO Daemon Daemon Provisioning complete Jan 28 01:21:20.933625 waagent[1828]: 2026-01-28T01:21:20.933579Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 28 01:21:20.938747 waagent[1828]: 2026-01-28T01:21:20.938703Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 28 01:21:20.946700 waagent[1828]: 2026-01-28T01:21:20.946660Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 28 01:21:21.072575 waagent[1898]: 2026-01-28T01:21:21.072497Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 28 01:21:21.073502 waagent[1898]: 2026-01-28T01:21:21.073009Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 28 01:21:21.073502 waagent[1898]: 2026-01-28T01:21:21.073082Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 28 01:21:21.418682 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#173 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:21:21.419088 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#174 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 28 01:21:21.636853 waagent[1898]: 2026-01-28T01:21:21.635980Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 28 01:21:21.636853 waagent[1898]: 2026-01-28T01:21:21.636224Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:21:21.636853 waagent[1898]: 2026-01-28T01:21:21.636287Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:21:21.645167 waagent[1898]: 2026-01-28T01:21:21.645107Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 01:21:21.650799 waagent[1898]: 2026-01-28T01:21:21.650759Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 28 01:21:21.651356 waagent[1898]: 2026-01-28T01:21:21.651318Z INFO ExtHandler Jan 28 01:21:21.651518 waagent[1898]: 2026-01-28T01:21:21.651485Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4bd7d3c2-d063-4435-8f6e-92441a575d98 eTag: 14043087609817076414 source: Fabric] Jan 28 01:21:21.651902 waagent[1898]: 2026-01-28T01:21:21.651863Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 28 01:21:21.668796 waagent[1898]: 2026-01-28T01:21:21.667912Z INFO ExtHandler Jan 28 01:21:21.668796 waagent[1898]: 2026-01-28T01:21:21.668038Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 28 01:21:21.672853 waagent[1898]: 2026-01-28T01:21:21.672170Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 28 01:21:21.965367 waagent[1898]: 2026-01-28T01:21:21.965231Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A3918207A3866F88E02679A078E636ADB275E3C7', 'hasPrivateKey': True} Jan 28 01:21:21.965902 waagent[1898]: 2026-01-28T01:21:21.965857Z INFO ExtHandler Fetch goal state completed Jan 28 01:21:21.980797 waagent[1898]: 2026-01-28T01:21:21.980747Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1898 Jan 28 01:21:21.980962 waagent[1898]: 2026-01-28T01:21:21.980930Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 28 01:21:21.982542 waagent[1898]: 2026-01-28T01:21:21.982502Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 28 01:21:21.982914 waagent[1898]: 2026-01-28T01:21:21.982877Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 28 01:21:22.057860 waagent[1898]: 2026-01-28T01:21:22.057739Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 28 01:21:22.058002 waagent[1898]: 2026-01-28T01:21:22.057962Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 28 01:21:22.064412 waagent[1898]: 2026-01-28T01:21:22.063935Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 28 01:21:22.070641 systemd[1]: Reloading requested from client PID 1919 ('systemctl') (unit waagent.service)... Jan 28 01:21:22.070653 systemd[1]: Reloading... Jan 28 01:21:22.139903 zram_generator::config[1953]: No configuration found. Jan 28 01:21:22.234745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:21:22.309866 systemd[1]: Reloading finished in 238 ms. Jan 28 01:21:22.329315 waagent[1898]: 2026-01-28T01:21:22.328967Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 28 01:21:22.335451 systemd[1]: Reloading requested from client PID 2007 ('systemctl') (unit waagent.service)... Jan 28 01:21:22.335463 systemd[1]: Reloading... Jan 28 01:21:22.414901 zram_generator::config[2044]: No configuration found. Jan 28 01:21:22.509041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:21:22.583776 systemd[1]: Reloading finished in 248 ms. Jan 28 01:21:22.609128 waagent[1898]: 2026-01-28T01:21:22.608430Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 28 01:21:22.609128 waagent[1898]: 2026-01-28T01:21:22.608584Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 28 01:21:22.994088 waagent[1898]: 2026-01-28T01:21:22.993968Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 28 01:21:22.994595 waagent[1898]: 2026-01-28T01:21:22.994551Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 28 01:21:22.995366 waagent[1898]: 2026-01-28T01:21:22.995291Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 28 01:21:22.995733 waagent[1898]: 2026-01-28T01:21:22.995652Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 28 01:21:22.996046 waagent[1898]: 2026-01-28T01:21:22.995945Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 28 01:21:22.996130 waagent[1898]: 2026-01-28T01:21:22.996038Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 28 01:21:22.996532 waagent[1898]: 2026-01-28T01:21:22.996428Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 28 01:21:22.996625 waagent[1898]: 2026-01-28T01:21:22.996528Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 28 01:21:22.997205 waagent[1898]: 2026-01-28T01:21:22.997043Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:21:22.997205 waagent[1898]: 2026-01-28T01:21:22.997161Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 28 01:21:22.997860 waagent[1898]: 2026-01-28T01:21:22.997535Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:21:22.997860 waagent[1898]: 2026-01-28T01:21:22.997627Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:21:22.997950 waagent[1898]: 2026-01-28T01:21:22.997856Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 28 01:21:23.000079 waagent[1898]: 2026-01-28T01:21:23.000029Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 28 01:21:23.001686 waagent[1898]: 2026-01-28T01:21:23.000987Z INFO EnvHandler ExtHandler Configure routes
Jan 28 01:21:23.001778 waagent[1898]: 2026-01-28T01:21:23.001653Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 28 01:21:23.001778 waagent[1898]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 28 01:21:23.001778 waagent[1898]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 28 01:21:23.001778 waagent[1898]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 28 01:21:23.001778 waagent[1898]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 28 01:21:23.001778 waagent[1898]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 28 01:21:23.001778 waagent[1898]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 28 01:21:23.002686 waagent[1898]: 2026-01-28T01:21:23.002556Z INFO EnvHandler ExtHandler Gateway:None
Jan 28 01:21:23.003524 waagent[1898]: 2026-01-28T01:21:23.003344Z INFO EnvHandler ExtHandler Routes:None
Jan 28 01:21:23.004869 waagent[1898]: 2026-01-28T01:21:23.004801Z INFO ExtHandler ExtHandler
Jan 28 01:21:23.005205 waagent[1898]: 2026-01-28T01:21:23.005158Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2fb8edfe-f6c2-4b35-891a-7d403bf8a553 correlation 9090bbe2-38a0-4c84-89b5-26b4bfa77983 created: 2026-01-28T01:20:23.893219Z]
Jan 28 01:21:23.006440 waagent[1898]: 2026-01-28T01:21:23.006362Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 28 01:21:23.007857 waagent[1898]: 2026-01-28T01:21:23.007479Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Jan 28 01:21:23.037888 waagent[1898]: 2026-01-28T01:21:23.037768Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9DB09950-EB1A-4F5A-B6BB-A51D2F26C675;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 28 01:21:23.059788 waagent[1898]: 2026-01-28T01:21:23.059718Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 28 01:21:23.059788 waagent[1898]: Executing ['ip', '-a', '-o', 'link']:
Jan 28 01:21:23.059788 waagent[1898]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 28 01:21:23.059788 waagent[1898]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:40:41 brd ff:ff:ff:ff:ff:ff
Jan 28 01:21:23.059788 waagent[1898]: 3: enP51404s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:40:41 brd ff:ff:ff:ff:ff:ff\ altname enP51404p0s2
Jan 28 01:21:23.059788 waagent[1898]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 28 01:21:23.059788 waagent[1898]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 28 01:21:23.059788 waagent[1898]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 28 01:21:23.059788 waagent[1898]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 28 01:21:23.059788 waagent[1898]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 28 01:21:23.059788 waagent[1898]: 2: eth0 inet6 fe80::222:48ff:febb:4041/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 28 01:21:23.123975 waagent[1898]: 2026-01-28T01:21:23.123845Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 28 01:21:23.123975 waagent[1898]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 28 01:21:23.123975 waagent[1898]: pkts bytes target prot opt in out source destination
Jan 28 01:21:23.123975 waagent[1898]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 28 01:21:23.123975 waagent[1898]: pkts bytes target prot opt in out source destination
Jan 28 01:21:23.123975 waagent[1898]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 28 01:21:23.123975 waagent[1898]: pkts bytes target prot opt in out source destination
Jan 28 01:21:23.123975 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 28 01:21:23.123975 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 28 01:21:23.123975 waagent[1898]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 28 01:21:23.126653 waagent[1898]: 2026-01-28T01:21:23.126600Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 28 01:21:23.126653 waagent[1898]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 28 01:21:23.126653 waagent[1898]: pkts bytes target prot opt in out source destination
Jan 28 01:21:23.126653 waagent[1898]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 28 01:21:23.126653 waagent[1898]: pkts bytes target prot opt in out source destination
Jan 28 01:21:23.126653 waagent[1898]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 28 01:21:23.126653 waagent[1898]: pkts bytes target prot opt in out source destination
Jan 28 01:21:23.126653 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 28 01:21:23.126653 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 28 01:21:23.126653 waagent[1898]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 28 01:21:23.126895 waagent[1898]: 2026-01-28T01:21:23.126861Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 28 01:21:27.696648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 28 01:21:27.704002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:21:27.832801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:21:27.836743 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:21:27.931537 kubelet[2134]: E0128 01:21:27.931463 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:21:27.934734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:21:27.934913 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:21:38.185350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 28 01:21:38.194062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:21:38.519642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:21:38.531053 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:21:38.560535 kubelet[2149]: E0128 01:21:38.560466 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:21:38.563508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:21:38.563644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:21:39.565234 chronyd[1673]: Selected source PHC0 Jan 28 01:21:40.442598 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:21:40.443616 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:38324.service - OpenSSH per-connection server daemon (10.200.16.10:38324). Jan 28 01:21:40.989584 sshd[2156]: Accepted publickey for core from 10.200.16.10 port 38324 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:21:40.990884 sshd[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:40.994357 systemd-logind[1690]: New session 3 of user core. Jan 28 01:21:41.001973 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:21:41.416075 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:38336.service - OpenSSH per-connection server daemon (10.200.16.10:38336). Jan 28 01:21:41.862041 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 38336 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:21:41.863199 sshd[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:41.866904 systemd-logind[1690]: New session 4 of user core. Jan 28 01:21:41.872953 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:21:42.194303 sshd[2161]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:42.197695 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:38336.service: Deactivated successfully. Jan 28 01:21:42.199171 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:21:42.199754 systemd-logind[1690]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:21:42.200522 systemd-logind[1690]: Removed session 4. Jan 28 01:21:42.280142 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:38342.service - OpenSSH per-connection server daemon (10.200.16.10:38342). Jan 28 01:21:42.762442 sshd[2168]: Accepted publickey for core from 10.200.16.10 port 38342 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:21:42.763713 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:42.768274 systemd-logind[1690]: New session 5 of user core. Jan 28 01:21:42.774990 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:21:43.107906 sshd[2168]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:43.111291 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:38342.service: Deactivated successfully. Jan 28 01:21:43.112669 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:21:43.114450 systemd-logind[1690]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:21:43.115504 systemd-logind[1690]: Removed session 5. 
Jan 28 01:21:43.190078 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:38352.service - OpenSSH per-connection server daemon (10.200.16.10:38352). Jan 28 01:21:43.639287 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 38352 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:21:43.640542 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:43.644410 systemd-logind[1690]: New session 6 of user core. Jan 28 01:21:43.651027 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:21:43.978151 sshd[2175]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:43.981691 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:38352.service: Deactivated successfully. Jan 28 01:21:43.983599 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:21:43.984216 systemd-logind[1690]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:21:43.985093 systemd-logind[1690]: Removed session 6. Jan 28 01:21:44.058320 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:38360.service - OpenSSH per-connection server daemon (10.200.16.10:38360). Jan 28 01:21:44.510848 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 38360 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:21:44.512204 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:44.515764 systemd-logind[1690]: New session 7 of user core. Jan 28 01:21:44.523965 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:21:44.907389 sudo[2185]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:21:44.907663 sudo[2185]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:21:44.934987 sudo[2185]: pam_unix(sudo:session): session closed for user root Jan 28 01:21:45.016243 sshd[2182]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:45.019047 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:38360.service: Deactivated successfully. Jan 28 01:21:45.020543 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:21:45.021765 systemd-logind[1690]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:21:45.022663 systemd-logind[1690]: Removed session 7. Jan 28 01:21:45.106601 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:38368.service - OpenSSH per-connection server daemon (10.200.16.10:38368). Jan 28 01:21:45.591218 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 38368 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:21:45.593365 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:45.596982 systemd-logind[1690]: New session 8 of user core. Jan 28 01:21:45.603963 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 28 01:21:45.865566 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:21:45.866394 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:21:45.869455 sudo[2194]: pam_unix(sudo:session): session closed for user root Jan 28 01:21:45.874028 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 01:21:45.874284 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:21:45.886490 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 01:21:45.887244 auditctl[2197]: No rules Jan 28 01:21:45.887730 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:21:45.887905 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 01:21:45.890669 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:21:45.912911 augenrules[2215]: No rules Jan 28 01:21:45.914305 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:21:45.917031 sudo[2193]: pam_unix(sudo:session): session closed for user root Jan 28 01:21:45.994526 sshd[2190]: pam_unix(sshd:session): session closed for user core Jan 28 01:21:45.997872 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:38368.service: Deactivated successfully. Jan 28 01:21:45.999309 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:21:46.001308 systemd-logind[1690]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:21:46.002226 systemd-logind[1690]: Removed session 8. Jan 28 01:21:46.082381 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:38374.service - OpenSSH per-connection server daemon (10.200.16.10:38374). Jan 28 01:21:46.568200 sshd[2223]: Accepted publickey for core from 10.200.16.10 port 38374 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:21:46.569485 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:21:46.572985 systemd-logind[1690]: New session 9 of user core. Jan 28 01:21:46.579981 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 01:21:46.842668 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:21:46.842954 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:21:47.714096 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:21:47.714220 (dockerd)[2241]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:21:48.153050 dockerd[2241]: time="2026-01-28T01:21:48.152936319Z" level=info msg="Starting up" Jan 28 01:21:48.428502 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4142943979-merged.mount: Deactivated successfully. Jan 28 01:21:48.475555 dockerd[2241]: time="2026-01-28T01:21:48.475515983Z" level=info msg="Loading containers: start." Jan 28 01:21:48.680879 kernel: Initializing XFRM netlink socket Jan 28 01:21:48.701376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:21:48.706006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:48.807987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
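The two sudo commands above delete the shipped audit rule fragments and restart audit-rules.service; augenrules then assembles whatever remains under /etc/audit/rules.d/, which is now nothing, hence the paired "No rules" lines from auditctl and augenrules. The same sequence by hand (paths taken from the log):

    # augenrules concatenates /etc/audit/rules.d/*.rules into
    # /etc/audit/audit.rules and loads the result; an empty rule set is
    # reported as "No rules".
    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load
    auditctl -l    # -> "No rules"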
Jan 28 01:21:48.811367 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:21:48.923060 kubelet[2317]: E0128 01:21:48.923005 2317 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:21:48.925711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:21:48.925991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:21:49.300180 systemd-networkd[1339]: docker0: Link UP Jan 28 01:21:49.319435 dockerd[2241]: time="2026-01-28T01:21:49.318908427Z" level=info msg="Loading containers: done." Jan 28 01:21:49.385992 dockerd[2241]: time="2026-01-28T01:21:49.385883347Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:21:49.386248 dockerd[2241]: time="2026-01-28T01:21:49.386232628Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 01:21:49.386423 dockerd[2241]: time="2026-01-28T01:21:49.386408108Z" level=info msg="Daemon has completed initialization" Jan 28 01:21:49.441651 dockerd[2241]: time="2026-01-28T01:21:49.441586054Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:21:49.442121 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:21:50.127104 containerd[1726]: time="2026-01-28T01:21:50.127033430Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 28 01:21:50.937875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756488757.mount: Deactivated successfully. 
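The PullImage/ImageCreate lines that follow are containerd's CRI plugin fetching the control-plane images. The same pull can be reproduced against the CRI socket (illustrative; assumes crictl is installed and pointed at containerd):

    # Equivalent manual pull through the CRI socket (illustrative).
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.34.3
    crictl images | grep kube-apiserver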
Jan 28 01:21:52.180887 containerd[1726]: time="2026-01-28T01:21:52.180828400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:52.183548 containerd[1726]: time="2026-01-28T01:21:52.183522839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040" Jan 28 01:21:52.188556 containerd[1726]: time="2026-01-28T01:21:52.188517638Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:52.195346 containerd[1726]: time="2026-01-28T01:21:52.195296877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:52.196607 containerd[1726]: time="2026-01-28T01:21:52.196285996Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.069181486s" Jan 28 01:21:52.196607 containerd[1726]: time="2026-01-28T01:21:52.196320756Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 28 01:21:52.197593 containerd[1726]: time="2026-01-28T01:21:52.197390036Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 28 01:21:53.397412 containerd[1726]: time="2026-01-28T01:21:53.397361232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:53.399617 containerd[1726]: time="2026-01-28T01:21:53.399588551Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477" Jan 28 01:21:53.402514 containerd[1726]: time="2026-01-28T01:21:53.402466951Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:53.409185 containerd[1726]: time="2026-01-28T01:21:53.409139269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:53.410175 containerd[1726]: time="2026-01-28T01:21:53.410144589Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.212722273s" Jan 28 01:21:53.410274 containerd[1726]: time="2026-01-28T01:21:53.410257349Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 28 01:21:53.410751 
containerd[1726]: time="2026-01-28T01:21:53.410716109Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 28 01:21:54.506078 containerd[1726]: time="2026-01-28T01:21:54.506032766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:54.511741 containerd[1726]: time="2026-01-28T01:21:54.511700125Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716" Jan 28 01:21:54.516138 containerd[1726]: time="2026-01-28T01:21:54.516094564Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:54.520358 containerd[1726]: time="2026-01-28T01:21:54.520310403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:54.521455 containerd[1726]: time="2026-01-28T01:21:54.521333243Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.110453254s" Jan 28 01:21:54.521455 containerd[1726]: time="2026-01-28T01:21:54.521366323Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 28 01:21:54.522650 containerd[1726]: time="2026-01-28T01:21:54.522495522Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 28 01:21:55.480655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935514843.mount: Deactivated successfully. 
Jan 28 01:21:55.724566 containerd[1726]: time="2026-01-28T01:21:55.723867850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:55.726460 containerd[1726]: time="2026-01-28T01:21:55.726428170Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253" Jan 28 01:21:55.729259 containerd[1726]: time="2026-01-28T01:21:55.729213730Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:55.737319 containerd[1726]: time="2026-01-28T01:21:55.737184169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:55.738032 containerd[1726]: time="2026-01-28T01:21:55.737821409Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.215293527s" Jan 28 01:21:55.738032 containerd[1726]: time="2026-01-28T01:21:55.737872889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 28 01:21:55.738422 containerd[1726]: time="2026-01-28T01:21:55.738394289Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 28 01:21:56.406532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1602309973.mount: Deactivated successfully. 
Jan 28 01:21:57.704877 containerd[1726]: time="2026-01-28T01:21:57.704212947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:57.706642 containerd[1726]: time="2026-01-28T01:21:57.706610387Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Jan 28 01:21:57.709371 containerd[1726]: time="2026-01-28T01:21:57.709329906Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:57.715268 containerd[1726]: time="2026-01-28T01:21:57.715231266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:57.716163 containerd[1726]: time="2026-01-28T01:21:57.716014426Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.977585497s" Jan 28 01:21:57.716163 containerd[1726]: time="2026-01-28T01:21:57.716044626Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 28 01:21:57.716480 containerd[1726]: time="2026-01-28T01:21:57.716456945Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 28 01:21:58.306800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796812607.mount: Deactivated successfully. 
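By this point kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and coredns have been pulled, pause is in flight, and etcd is queued next; this matches the image set kubeadm pre-pulls for a control-plane node (the KUBELET_KUBEADM_ARGS references earlier in the log point the same way). Assuming kubeadm is driving these pulls, the list can be reproduced with:

    # Control-plane image set for this release (illustrative).
    kubeadm config images list --kubernetes-version v1.34.3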
Jan 28 01:21:58.322417 containerd[1726]: time="2026-01-28T01:21:58.322372277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:58.325421 containerd[1726]: time="2026-01-28T01:21:58.325257357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Jan 28 01:21:58.329854 containerd[1726]: time="2026-01-28T01:21:58.328572916Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:58.332301 containerd[1726]: time="2026-01-28T01:21:58.332272036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:21:58.333077 containerd[1726]: time="2026-01-28T01:21:58.333047596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 616.561011ms" Jan 28 01:21:58.333174 containerd[1726]: time="2026-01-28T01:21:58.333159356Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 28 01:21:58.333883 containerd[1726]: time="2026-01-28T01:21:58.333846596Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 28 01:21:58.758290 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 28 01:21:58.951348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:21:58.957011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:21:58.993007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945295491.mount: Deactivated successfully. Jan 28 01:21:59.301169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:21:59.305396 (kubelet)[2533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:21:59.337257 kubelet[2533]: E0128 01:21:59.337213 2533 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:21:59.339964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:21:59.340310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:22:00.938633 update_engine[1691]: I20260128 01:22:00.938026 1691 update_attempter.cc:509] Updating boot flags... 
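The kubelet is now on its fourth scheduled restart, still failing on the same missing config file. The unit's crash-loop bookkeeping can be read back from systemd directly:

    # Inspect the restart loop visible above.
    systemctl show kubelet -p NRestarts -p Result -p ExecMainStatus
    journalctl -u kubelet -n 20 --no-pager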
Jan 28 01:22:00.999929 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2571) Jan 28 01:22:01.086089 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2575) Jan 28 01:22:03.642514 containerd[1726]: time="2026-01-28T01:22:03.642448674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:22:03.645645 containerd[1726]: time="2026-01-28T01:22:03.645616713Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Jan 28 01:22:03.648420 containerd[1726]: time="2026-01-28T01:22:03.648394193Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:22:03.652696 containerd[1726]: time="2026-01-28T01:22:03.652650112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:22:03.654855 containerd[1726]: time="2026-01-28T01:22:03.653757632Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 5.319775956s" Jan 28 01:22:03.654855 containerd[1726]: time="2026-01-28T01:22:03.653791552Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 28 01:22:08.619971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:08.633066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:22:08.656525 systemd[1]: Reloading requested from client PID 2683 ('systemctl') (unit session-9.scope)... Jan 28 01:22:08.656541 systemd[1]: Reloading... Jan 28 01:22:08.749889 zram_generator::config[2723]: No configuration found. Jan 28 01:22:08.853562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:22:08.932012 systemd[1]: Reloading finished in 275 ms. Jan 28 01:22:08.983905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:22:08.987365 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:22:08.987586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:08.989093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:22:09.132388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:09.137003 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:22:09.169349 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:22:09.169349 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:22:09.261800 kubelet[2792]: I0128 01:22:09.261723 2792 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:22:10.053681 kubelet[2792]: I0128 01:22:10.053604 2792 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 01:22:10.053681 kubelet[2792]: I0128 01:22:10.053675 2792 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:22:10.054943 kubelet[2792]: I0128 01:22:10.054924 2792 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 01:22:10.054943 kubelet[2792]: I0128 01:22:10.054943 2792 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:22:10.055182 kubelet[2792]: I0128 01:22:10.055167 2792 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:22:10.062849 kubelet[2792]: E0128 01:22:10.062780 2792 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:22:10.063572 kubelet[2792]: I0128 01:22:10.063475 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:22:10.066553 kubelet[2792]: E0128 01:22:10.066518 2792 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:22:10.066709 kubelet[2792]: I0128 01:22:10.066681 2792 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 28 01:22:10.071313 kubelet[2792]: I0128 01:22:10.070556 2792 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 01:22:10.071313 kubelet[2792]: I0128 01:22:10.070769 2792 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:22:10.071313 kubelet[2792]: I0128 01:22:10.070797 2792 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-6d8ceced70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:22:10.071313 kubelet[2792]: I0128 01:22:10.071023 2792 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:22:10.071510 kubelet[2792]: I0128 01:22:10.071035 2792 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 01:22:10.071510 kubelet[2792]: I0128 01:22:10.071129 2792 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 01:22:10.075432 kubelet[2792]: I0128 01:22:10.075413 2792 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:22:10.076782 kubelet[2792]: I0128 01:22:10.076676 2792 kubelet.go:475] "Attempting to sync node with API server" Jan 28 01:22:10.076782 kubelet[2792]: I0128 01:22:10.076697 2792 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:22:10.076782 kubelet[2792]: I0128 01:22:10.076722 2792 kubelet.go:387] "Adding apiserver pod source" Jan 28 01:22:10.076782 kubelet[2792]: I0128 01:22:10.076732 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:22:10.077595 kubelet[2792]: E0128 01:22:10.077177 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-6d8ceced70&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:22:10.077595 kubelet[2792]: E0128 01:22:10.077562 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:22:10.079858 kubelet[2792]: I0128 01:22:10.079176 2792 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:22:10.079858 kubelet[2792]: I0128 01:22:10.079718 2792 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:22:10.079858 kubelet[2792]: I0128 01:22:10.079745 2792 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 01:22:10.079858 kubelet[2792]: W0128 01:22:10.079782 2792 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:22:10.083111 kubelet[2792]: I0128 01:22:10.083097 2792 server.go:1262] "Started kubelet" Jan 28 01:22:10.084443 kubelet[2792]: I0128 01:22:10.084420 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:22:10.085061 kubelet[2792]: I0128 01:22:10.085032 2792 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:22:10.086161 kubelet[2792]: I0128 01:22:10.086096 2792 server.go:310] "Adding debug handlers to kubelet server" Jan 28 01:22:10.088745 kubelet[2792]: I0128 01:22:10.086591 2792 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:22:10.088806 kubelet[2792]: I0128 01:22:10.088764 2792 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 01:22:10.088992 kubelet[2792]: I0128 01:22:10.088976 2792 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:22:10.089032 kubelet[2792]: E0128 01:22:10.086183 2792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-6d8ceced70.188ec07921289ba9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-6d8ceced70,UID:ci-4081.3.6-n-6d8ceced70,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-6d8ceced70,},FirstTimestamp:2026-01-28 01:22:10.083068841 +0000 UTC m=+0.943181752,LastTimestamp:2026-01-28 01:22:10.083068841 +0000 UTC m=+0.943181752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-6d8ceced70,}" Jan 28 01:22:10.090643 kubelet[2792]: I0128 01:22:10.090618 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:22:10.091688 kubelet[2792]: I0128 01:22:10.091673 2792 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 01:22:10.092696 kubelet[2792]: E0128 01:22:10.092072 2792 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" Jan 28 01:22:10.092696 
kubelet[2792]: I0128 01:22:10.092596 2792 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 01:22:10.092696 kubelet[2792]: I0128 01:22:10.092637 2792 reconciler.go:29] "Reconciler: start to sync state" Jan 28 01:22:10.093188 kubelet[2792]: E0128 01:22:10.093163 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:22:10.093249 kubelet[2792]: E0128 01:22:10.093225 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d8ceced70?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Jan 28 01:22:10.096236 kubelet[2792]: I0128 01:22:10.096213 2792 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:22:10.096236 kubelet[2792]: I0128 01:22:10.096230 2792 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:22:10.096326 kubelet[2792]: I0128 01:22:10.096296 2792 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:22:10.108248 kubelet[2792]: I0128 01:22:10.108210 2792 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 01:22:10.109254 kubelet[2792]: I0128 01:22:10.109237 2792 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 28 01:22:10.109337 kubelet[2792]: I0128 01:22:10.109329 2792 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 01:22:10.109408 kubelet[2792]: I0128 01:22:10.109401 2792 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 01:22:10.109502 kubelet[2792]: E0128 01:22:10.109486 2792 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:22:10.114620 kubelet[2792]: E0128 01:22:10.114595 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:22:10.117097 kubelet[2792]: E0128 01:22:10.117081 2792 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:22:10.193133 kubelet[2792]: E0128 01:22:10.193098 2792 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" Jan 28 01:22:10.209654 kubelet[2792]: E0128 01:22:10.209617 2792 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:22:10.221024 kubelet[2792]: I0128 01:22:10.220975 2792 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:22:10.221024 kubelet[2792]: I0128 01:22:10.220989 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:22:10.221024 kubelet[2792]: I0128 01:22:10.221007 2792 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:22:10.225177 kubelet[2792]: I0128 01:22:10.225149 2792 policy_none.go:49] "None policy: Start" Jan 28 01:22:10.225177 kubelet[2792]: I0128 01:22:10.225173 2792 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 01:22:10.225282 kubelet[2792]: I0128 01:22:10.225188 2792 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 01:22:10.229235 kubelet[2792]: I0128 01:22:10.229216 2792 policy_none.go:47] "Start" Jan 28 01:22:10.233049 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:22:10.244793 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:22:10.247892 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:22:10.256574 kubelet[2792]: E0128 01:22:10.256551 2792 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:22:10.256970 kubelet[2792]: I0128 01:22:10.256895 2792 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:22:10.256970 kubelet[2792]: I0128 01:22:10.256911 2792 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:22:10.257327 kubelet[2792]: I0128 01:22:10.257315 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:22:10.258495 kubelet[2792]: E0128 01:22:10.258471 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:22:10.258588 kubelet[2792]: E0128 01:22:10.258509 2792 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-6d8ceced70\" not found" Jan 28 01:22:10.294424 kubelet[2792]: E0128 01:22:10.294374 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d8ceced70?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Jan 28 01:22:10.358968 kubelet[2792]: I0128 01:22:10.358878 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.360341 kubelet[2792]: E0128 01:22:10.360275 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.428039 systemd[1]: Created slice kubepods-burstable-pod734fa22b64dc2e96fcfa74b1107b48a1.slice - libcontainer container kubepods-burstable-pod734fa22b64dc2e96fcfa74b1107b48a1.slice. Jan 28 01:22:10.446148 kubelet[2792]: E0128 01:22:10.446116 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.450704 systemd[1]: Created slice kubepods-burstable-pod13eb60e2321217e61a6037188b76e18d.slice - libcontainer container kubepods-burstable-pod13eb60e2321217e61a6037188b76e18d.slice. Jan 28 01:22:10.452098 kubelet[2792]: E0128 01:22:10.452075 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.463163 systemd[1]: Created slice kubepods-burstable-podd917e9f18c3d7e4ac1824b51df8a640b.slice - libcontainer container kubepods-burstable-podd917e9f18c3d7e4ac1824b51df8a640b.slice. 
Jan 28 01:22:10.464788 kubelet[2792]: E0128 01:22:10.464765 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495077 kubelet[2792]: I0128 01:22:10.495046 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495215 kubelet[2792]: I0128 01:22:10.495202 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/734fa22b64dc2e96fcfa74b1107b48a1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" (UID: \"734fa22b64dc2e96fcfa74b1107b48a1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495468 kubelet[2792]: I0128 01:22:10.495293 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495468 kubelet[2792]: I0128 01:22:10.495337 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495468 kubelet[2792]: I0128 01:22:10.495353 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495468 kubelet[2792]: I0128 01:22:10.495377 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d917e9f18c3d7e4ac1824b51df8a640b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-6d8ceced70\" (UID: \"d917e9f18c3d7e4ac1824b51df8a640b\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495468 kubelet[2792]: I0128 01:22:10.495393 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/734fa22b64dc2e96fcfa74b1107b48a1-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" (UID: \"734fa22b64dc2e96fcfa74b1107b48a1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495603 kubelet[2792]: I0128 01:22:10.495408 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/734fa22b64dc2e96fcfa74b1107b48a1-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" (UID: \"734fa22b64dc2e96fcfa74b1107b48a1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.495603 kubelet[2792]: I0128 01:22:10.495431 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.562724 kubelet[2792]: I0128 01:22:10.562363 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.562724 kubelet[2792]: E0128 01:22:10.562657 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.695385 kubelet[2792]: E0128 01:22:10.695289 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d8ceced70?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Jan 28 01:22:10.950691 kubelet[2792]: E0128 01:22:10.950590 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:22:10.964163 kubelet[2792]: I0128 01:22:10.964136 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:10.964429 kubelet[2792]: E0128 01:22:10.964409 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:11.603003 kubelet[2792]: E0128 01:22:11.419479 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-6d8ceced70&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:22:11.603003 kubelet[2792]: E0128 01:22:11.497080 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d8ceced70?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Jan 28 01:22:11.603003 kubelet[2792]: E0128 01:22:11.580880 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:22:11.654280 containerd[1726]: time="2026-01-28T01:22:11.654022359Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-6d8ceced70,Uid:734fa22b64dc2e96fcfa74b1107b48a1,Namespace:kube-system,Attempt:0,}" Jan 28 01:22:11.660026 containerd[1726]: time="2026-01-28T01:22:11.659917798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-6d8ceced70,Uid:13eb60e2321217e61a6037188b76e18d,Namespace:kube-system,Attempt:0,}" Jan 28 01:22:11.664075 containerd[1726]: time="2026-01-28T01:22:11.664046237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-6d8ceced70,Uid:d917e9f18c3d7e4ac1824b51df8a640b,Namespace:kube-system,Attempt:0,}" Jan 28 01:22:11.669762 kubelet[2792]: E0128 01:22:11.669730 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:22:11.766680 kubelet[2792]: I0128 01:22:11.766652 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:11.767000 kubelet[2792]: E0128 01:22:11.766968 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:12.077701 kubelet[2792]: E0128 01:22:12.077555 2792 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:22:12.269772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298184148.mount: Deactivated successfully. 
Jan 28 01:22:12.288196 containerd[1726]: time="2026-01-28T01:22:12.288135637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:22:12.297323 containerd[1726]: time="2026-01-28T01:22:12.297284835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 28 01:22:12.300692 containerd[1726]: time="2026-01-28T01:22:12.299976034Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:22:12.303391 containerd[1726]: time="2026-01-28T01:22:12.303352554Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:22:12.305858 containerd[1726]: time="2026-01-28T01:22:12.305730033Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:22:12.308621 containerd[1726]: time="2026-01-28T01:22:12.308522913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:22:12.311081 containerd[1726]: time="2026-01-28T01:22:12.311046832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:22:12.314147 containerd[1726]: time="2026-01-28T01:22:12.314108472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:22:12.315668 containerd[1726]: time="2026-01-28T01:22:12.315156031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.805512ms" Jan 28 01:22:12.328145 containerd[1726]: time="2026-01-28T01:22:12.327249589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 663.140792ms" Jan 28 01:22:12.337165 containerd[1726]: time="2026-01-28T01:22:12.337123347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 677.136229ms" Jan 28 01:22:12.876037 containerd[1726]: time="2026-01-28T01:22:12.875959323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:22:12.876585 containerd[1726]: time="2026-01-28T01:22:12.876373963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:22:12.876585 containerd[1726]: time="2026-01-28T01:22:12.876414243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:12.876696 containerd[1726]: time="2026-01-28T01:22:12.876553083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:12.879479 kubelet[2792]: E0128 01:22:12.879441 2792 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:22:12.882003 containerd[1726]: time="2026-01-28T01:22:12.881812922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:22:12.882003 containerd[1726]: time="2026-01-28T01:22:12.881891362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:22:12.882003 containerd[1726]: time="2026-01-28T01:22:12.881907402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:12.882213 containerd[1726]: time="2026-01-28T01:22:12.881972202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:12.884543 containerd[1726]: time="2026-01-28T01:22:12.884472161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:22:12.884630 containerd[1726]: time="2026-01-28T01:22:12.884528801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:22:12.884630 containerd[1726]: time="2026-01-28T01:22:12.884539801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:12.885496 containerd[1726]: time="2026-01-28T01:22:12.885451721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:12.897121 systemd[1]: Started cri-containerd-12b12e7694ecff73228439f11ee800532d9ddf0d5b0ac2d54e6332fb5d886033.scope - libcontainer container 12b12e7694ecff73228439f11ee800532d9ddf0d5b0ac2d54e6332fb5d886033. Jan 28 01:22:12.904582 systemd[1]: Started cri-containerd-cc4d13516314b1604af0828ed5ce7724260e44c9f55ef7701c000f579d35fe3e.scope - libcontainer container cc4d13516314b1604af0828ed5ce7724260e44c9f55ef7701c000f579d35fe3e. Jan 28 01:22:12.922546 systemd[1]: Started cri-containerd-474013199c7fed3a03726f24f2e2ee82a43a8ede9d794970b02138e2471372d2.scope - libcontainer container 474013199c7fed3a03726f24f2e2ee82a43a8ede9d794970b02138e2471372d2. 
Jan 28 01:22:12.964720 containerd[1726]: time="2026-01-28T01:22:12.964594266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-6d8ceced70,Uid:13eb60e2321217e61a6037188b76e18d,Namespace:kube-system,Attempt:0,} returns sandbox id \"474013199c7fed3a03726f24f2e2ee82a43a8ede9d794970b02138e2471372d2\"" Jan 28 01:22:12.968923 containerd[1726]: time="2026-01-28T01:22:12.968077305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-6d8ceced70,Uid:734fa22b64dc2e96fcfa74b1107b48a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc4d13516314b1604af0828ed5ce7724260e44c9f55ef7701c000f579d35fe3e\"" Jan 28 01:22:12.970526 containerd[1726]: time="2026-01-28T01:22:12.970494825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-6d8ceced70,Uid:d917e9f18c3d7e4ac1824b51df8a640b,Namespace:kube-system,Attempt:0,} returns sandbox id \"12b12e7694ecff73228439f11ee800532d9ddf0d5b0ac2d54e6332fb5d886033\"" Jan 28 01:22:12.973762 containerd[1726]: time="2026-01-28T01:22:12.973663544Z" level=info msg="CreateContainer within sandbox \"474013199c7fed3a03726f24f2e2ee82a43a8ede9d794970b02138e2471372d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:22:12.978083 containerd[1726]: time="2026-01-28T01:22:12.978054423Z" level=info msg="CreateContainer within sandbox \"cc4d13516314b1604af0828ed5ce7724260e44c9f55ef7701c000f579d35fe3e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:22:12.982073 containerd[1726]: time="2026-01-28T01:22:12.982039662Z" level=info msg="CreateContainer within sandbox \"12b12e7694ecff73228439f11ee800532d9ddf0d5b0ac2d54e6332fb5d886033\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:22:13.021117 containerd[1726]: time="2026-01-28T01:22:13.021073455Z" level=info msg="CreateContainer within sandbox \"474013199c7fed3a03726f24f2e2ee82a43a8ede9d794970b02138e2471372d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b1b01c7f487d8e3289058152d82deeba892e57573ea10247e9c011b2e178eb7\"" Jan 28 01:22:13.021850 containerd[1726]: time="2026-01-28T01:22:13.021814375Z" level=info msg="StartContainer for \"1b1b01c7f487d8e3289058152d82deeba892e57573ea10247e9c011b2e178eb7\"" Jan 28 01:22:13.043605 containerd[1726]: time="2026-01-28T01:22:13.043419970Z" level=info msg="CreateContainer within sandbox \"cc4d13516314b1604af0828ed5ce7724260e44c9f55ef7701c000f579d35fe3e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f7e2614ba2478b0bed24b35f5f2112859c72e71447df350dbd8425d6e652a9b\"" Jan 28 01:22:13.044141 containerd[1726]: time="2026-01-28T01:22:13.044013930Z" level=info msg="StartContainer for \"7f7e2614ba2478b0bed24b35f5f2112859c72e71447df350dbd8425d6e652a9b\"" Jan 28 01:22:13.047496 containerd[1726]: time="2026-01-28T01:22:13.045999370Z" level=info msg="CreateContainer within sandbox \"12b12e7694ecff73228439f11ee800532d9ddf0d5b0ac2d54e6332fb5d886033\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a72481da8afd921a12fc6e5cda3aa32fdaf637569037dc4d613ddb22baef6de\"" Jan 28 01:22:13.047042 systemd[1]: Started cri-containerd-1b1b01c7f487d8e3289058152d82deeba892e57573ea10247e9c011b2e178eb7.scope - libcontainer container 1b1b01c7f487d8e3289058152d82deeba892e57573ea10247e9c011b2e178eb7. 
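With the three sandboxes returned and the container IDs created, the StartContainer calls complete in the lines that follow. The result is visible through the CRI (illustrative):

    crictl pods    # the three sandboxes returned by RunPodSandbox above
    crictl ps      # kube-apiserver, kube-controller-manager, kube-scheduler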
Jan 28 01:22:13.047816 containerd[1726]: time="2026-01-28T01:22:13.047792250Z" level=info msg="StartContainer for \"3a72481da8afd921a12fc6e5cda3aa32fdaf637569037dc4d613ddb22baef6de\"" Jan 28 01:22:13.080003 systemd[1]: Started cri-containerd-7f7e2614ba2478b0bed24b35f5f2112859c72e71447df350dbd8425d6e652a9b.scope - libcontainer container 7f7e2614ba2478b0bed24b35f5f2112859c72e71447df350dbd8425d6e652a9b. Jan 28 01:22:13.096030 systemd[1]: Started cri-containerd-3a72481da8afd921a12fc6e5cda3aa32fdaf637569037dc4d613ddb22baef6de.scope - libcontainer container 3a72481da8afd921a12fc6e5cda3aa32fdaf637569037dc4d613ddb22baef6de. Jan 28 01:22:13.097959 kubelet[2792]: E0128 01:22:13.097829 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-6d8ceced70?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="3.2s" Jan 28 01:22:13.103178 containerd[1726]: time="2026-01-28T01:22:13.103141439Z" level=info msg="StartContainer for \"1b1b01c7f487d8e3289058152d82deeba892e57573ea10247e9c011b2e178eb7\" returns successfully" Jan 28 01:22:13.136124 kubelet[2792]: E0128 01:22:13.135933 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:13.150999 containerd[1726]: time="2026-01-28T01:22:13.150319230Z" level=info msg="StartContainer for \"7f7e2614ba2478b0bed24b35f5f2112859c72e71447df350dbd8425d6e652a9b\" returns successfully" Jan 28 01:22:13.173654 containerd[1726]: time="2026-01-28T01:22:13.173606225Z" level=info msg="StartContainer for \"3a72481da8afd921a12fc6e5cda3aa32fdaf637569037dc4d613ddb22baef6de\" returns successfully" Jan 28 01:22:13.369520 kubelet[2792]: I0128 01:22:13.369491 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:14.148438 kubelet[2792]: E0128 01:22:14.148406 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:14.152218 kubelet[2792]: E0128 01:22:14.152195 2792 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:14.969825 kubelet[2792]: I0128 01:22:14.969785 2792 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:14.992581 kubelet[2792]: I0128 01:22:14.992544 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.075843 kubelet[2792]: E0128 01:22:15.075736 2792 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-6d8ceced70.188ec07921289ba9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-6d8ceced70,UID:ci-4081.3.6-n-6d8ceced70,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-6d8ceced70,},FirstTimestamp:2026-01-28 01:22:10.083068841 +0000 UTC m=+0.943181752,LastTimestamp:2026-01-28 01:22:10.083068841 +0000 UTC m=+0.943181752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-6d8ceced70,}" Jan 28 01:22:15.081337 kubelet[2792]: I0128 01:22:15.081306 2792 apiserver.go:52] "Watching apiserver" Jan 28 01:22:15.092113 kubelet[2792]: E0128 01:22:15.092079 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.092113 kubelet[2792]: I0128 01:22:15.092107 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.092787 kubelet[2792]: I0128 01:22:15.092767 2792 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 01:22:15.109682 kubelet[2792]: E0128 01:22:15.109642 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.109682 kubelet[2792]: I0128 01:22:15.109676 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.121851 kubelet[2792]: E0128 01:22:15.121426 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-6d8ceced70\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.151331 kubelet[2792]: I0128 01:22:15.151300 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.151676 kubelet[2792]: I0128 01:22:15.151617 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.158692 kubelet[2792]: E0128 01:22:15.158500 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-6d8ceced70\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:15.159018 kubelet[2792]: E0128 01:22:15.158996 2792 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:16.153630 kubelet[2792]: I0128 01:22:16.153278 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:16.153630 kubelet[2792]: I0128 01:22:16.153402 2792 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:16.161307 kubelet[2792]: I0128 01:22:16.161279 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:16.172404 kubelet[2792]: I0128 01:22:16.171383 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:17.603545 kubelet[2792]: I0128 01:22:17.603515 2792 kubelet.go:3219] "Creating a mirror pod for static 
pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:17.614203 kubelet[2792]: I0128 01:22:17.612808 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:17.707116 systemd[1]: Reloading requested from client PID 3075 ('systemctl') (unit session-9.scope)... Jan 28 01:22:17.707394 systemd[1]: Reloading... Jan 28 01:22:17.784881 zram_generator::config[3115]: No configuration found. Jan 28 01:22:17.889047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:22:17.979014 systemd[1]: Reloading finished in 271 ms. Jan 28 01:22:18.014343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:22:18.027357 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:22:18.027540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:18.027585 systemd[1]: kubelet.service: Consumed 1.217s CPU time, 123.4M memory peak, 0B memory swap peak. Jan 28 01:22:18.032502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:22:18.396315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:22:18.400120 (kubelet)[3179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:22:18.445498 kubelet[3179]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:22:18.445498 kubelet[3179]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:22:18.445498 kubelet[3179]: I0128 01:22:18.444726 3179 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:22:18.453331 kubelet[3179]: I0128 01:22:18.452110 3179 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 01:22:18.453331 kubelet[3179]: I0128 01:22:18.452134 3179 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:22:18.453331 kubelet[3179]: I0128 01:22:18.452191 3179 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 01:22:18.453331 kubelet[3179]: I0128 01:22:18.452199 3179 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 01:22:18.453331 kubelet[3179]: I0128 01:22:18.452425 3179 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:22:18.454009 kubelet[3179]: I0128 01:22:18.453991 3179 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 01:22:18.456452 kubelet[3179]: I0128 01:22:18.456433 3179 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:22:18.460886 kubelet[3179]: E0128 01:22:18.460857 3179 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:22:18.461041 kubelet[3179]: I0128 01:22:18.461024 3179 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 28 01:22:18.466861 kubelet[3179]: I0128 01:22:18.466767 3179 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 28 01:22:18.466994 kubelet[3179]: I0128 01:22:18.466964 3179 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:22:18.467126 kubelet[3179]: I0128 01:22:18.466993 3179 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-6d8ceced70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:22:18.467204 kubelet[3179]: I0128 01:22:18.467127 3179 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:22:18.467204 kubelet[3179]: I0128 01:22:18.467135 3179 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 01:22:18.467204 kubelet[3179]: I0128 01:22:18.467156 3179 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 01:22:18.467902 kubelet[3179]: I0128 01:22:18.467887 3179 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:22:18.468022 kubelet[3179]: I0128 01:22:18.468012 3179 kubelet.go:475] "Attempting 
to sync node with API server" Jan 28 01:22:18.468055 kubelet[3179]: I0128 01:22:18.468027 3179 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:22:18.468055 kubelet[3179]: I0128 01:22:18.468051 3179 kubelet.go:387] "Adding apiserver pod source" Jan 28 01:22:18.468909 kubelet[3179]: I0128 01:22:18.468063 3179 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:22:18.471593 kubelet[3179]: I0128 01:22:18.471568 3179 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:22:18.474212 kubelet[3179]: I0128 01:22:18.474192 3179 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:22:18.474272 kubelet[3179]: I0128 01:22:18.474223 3179 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 01:22:18.479878 kubelet[3179]: I0128 01:22:18.478309 3179 server.go:1262] "Started kubelet" Jan 28 01:22:18.479878 kubelet[3179]: I0128 01:22:18.479243 3179 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:22:18.485298 kubelet[3179]: I0128 01:22:18.485276 3179 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:22:18.486434 kubelet[3179]: I0128 01:22:18.486413 3179 server.go:310] "Adding debug handlers to kubelet server" Jan 28 01:22:18.489828 kubelet[3179]: I0128 01:22:18.489779 3179 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:22:18.490622 kubelet[3179]: I0128 01:22:18.490603 3179 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 01:22:18.490849 kubelet[3179]: I0128 01:22:18.490822 3179 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:22:18.491135 kubelet[3179]: I0128 01:22:18.491114 3179 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:22:18.492766 kubelet[3179]: I0128 01:22:18.492751 3179 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 01:22:18.493222 kubelet[3179]: E0128 01:22:18.493204 3179 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-6d8ceced70\" not found" Jan 28 01:22:18.496098 kubelet[3179]: I0128 01:22:18.493563 3179 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 01:22:18.496203 kubelet[3179]: I0128 01:22:18.493651 3179 reconciler.go:29] "Reconciler: start to sync state" Jan 28 01:22:18.506060 kubelet[3179]: I0128 01:22:18.506036 3179 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:22:18.506297 kubelet[3179]: I0128 01:22:18.506279 3179 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:22:18.508799 kubelet[3179]: E0128 01:22:18.508779 3179 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:22:18.516026 kubelet[3179]: I0128 01:22:18.515824 3179 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:22:18.516426 kubelet[3179]: I0128 01:22:18.515936 3179 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 01:22:18.517379 kubelet[3179]: I0128 01:22:18.517284 3179 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 28 01:22:18.517379 kubelet[3179]: I0128 01:22:18.517311 3179 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 01:22:18.517379 kubelet[3179]: I0128 01:22:18.517363 3179 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 01:22:18.517533 kubelet[3179]: E0128 01:22:18.517409 3179 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:22:18.558591 kubelet[3179]: I0128 01:22:18.558568 3179 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:22:18.558591 kubelet[3179]: I0128 01:22:18.558582 3179 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:22:18.558591 kubelet[3179]: I0128 01:22:18.558600 3179 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:22:18.558763 kubelet[3179]: I0128 01:22:18.558717 3179 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:22:18.558763 kubelet[3179]: I0128 01:22:18.558727 3179 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:22:18.558763 kubelet[3179]: I0128 01:22:18.558742 3179 policy_none.go:49] "None policy: Start" Jan 28 01:22:18.558763 kubelet[3179]: I0128 01:22:18.558750 3179 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 01:22:18.558763 kubelet[3179]: I0128 01:22:18.558758 3179 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 01:22:18.558892 kubelet[3179]: I0128 01:22:18.558860 3179 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 28 01:22:18.558892 kubelet[3179]: I0128 01:22:18.558869 3179 policy_none.go:47] "Start" Jan 28 01:22:18.563527 kubelet[3179]: E0128 01:22:18.563503 3179 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:22:18.565445 kubelet[3179]: I0128 01:22:18.564999 3179 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:22:18.565445 kubelet[3179]: I0128 01:22:18.565020 3179 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:22:18.565445 kubelet[3179]: I0128 01:22:18.565373 3179 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:22:18.566439 kubelet[3179]: E0128 01:22:18.566420 3179 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:22:18.618480 kubelet[3179]: I0128 01:22:18.618439 3179 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.618690 kubelet[3179]: I0128 01:22:18.618678 3179 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.618923 kubelet[3179]: I0128 01:22:18.618898 3179 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.631882 kubelet[3179]: I0128 01:22:18.631829 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:18.632002 kubelet[3179]: E0128 01:22:18.631921 3179 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-6d8ceced70\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.632902 kubelet[3179]: I0128 01:22:18.632876 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:18.632983 kubelet[3179]: I0128 01:22:18.632915 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:18.632983 kubelet[3179]: E0128 01:22:18.632942 3179 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.633029 kubelet[3179]: E0128 01:22:18.632989 3179 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.666951 kubelet[3179]: I0128 01:22:18.666854 3179 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.676963 kubelet[3179]: I0128 01:22:18.676938 3179 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.677043 kubelet[3179]: I0128 01:22:18.677008 3179 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798068 kubelet[3179]: I0128 01:22:18.798035 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/734fa22b64dc2e96fcfa74b1107b48a1-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" (UID: \"734fa22b64dc2e96fcfa74b1107b48a1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798068 kubelet[3179]: I0128 01:22:18.798069 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/734fa22b64dc2e96fcfa74b1107b48a1-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" (UID: \"734fa22b64dc2e96fcfa74b1107b48a1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798222 kubelet[3179]: I0128 01:22:18.798089 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798222 kubelet[3179]: I0128 01:22:18.798105 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798222 kubelet[3179]: I0128 01:22:18.798124 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/734fa22b64dc2e96fcfa74b1107b48a1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" (UID: \"734fa22b64dc2e96fcfa74b1107b48a1\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798222 kubelet[3179]: I0128 01:22:18.798140 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798222 kubelet[3179]: I0128 01:22:18.798156 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798353 kubelet[3179]: I0128 01:22:18.798191 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13eb60e2321217e61a6037188b76e18d-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" (UID: \"13eb60e2321217e61a6037188b76e18d\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:18.798353 kubelet[3179]: I0128 01:22:18.798221 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d917e9f18c3d7e4ac1824b51df8a640b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-6d8ceced70\" (UID: \"d917e9f18c3d7e4ac1824b51df8a640b\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:19.469295 kubelet[3179]: I0128 01:22:19.469250 3179 apiserver.go:52] "Watching apiserver" Jan 28 01:22:19.497329 kubelet[3179]: I0128 01:22:19.497281 3179 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 01:22:19.551769 kubelet[3179]: I0128 01:22:19.550804 3179 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:19.551769 kubelet[3179]: I0128 01:22:19.551270 3179 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:19.555647 kubelet[3179]: I0128 01:22:19.554305 3179 
kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:19.565077 kubelet[3179]: I0128 01:22:19.565045 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:19.565384 kubelet[3179]: I0128 01:22:19.565338 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" podStartSLOduration=3.565327509 podStartE2EDuration="3.565327509s" podCreationTimestamp="2026-01-28 01:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:22:19.549442672 +0000 UTC m=+1.144685780" watchObservedRunningTime="2026-01-28 01:22:19.565327509 +0000 UTC m=+1.160570617" Jan 28 01:22:19.565460 kubelet[3179]: I0128 01:22:19.565444 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" podStartSLOduration=2.5654397490000003 podStartE2EDuration="2.565439749s" podCreationTimestamp="2026-01-28 01:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:22:19.565192309 +0000 UTC m=+1.160435417" watchObservedRunningTime="2026-01-28 01:22:19.565439749 +0000 UTC m=+1.160682817" Jan 28 01:22:19.565562 kubelet[3179]: E0128 01:22:19.565544 3179 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-6d8ceced70\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:19.569478 kubelet[3179]: I0128 01:22:19.569448 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:19.569568 kubelet[3179]: E0128 01:22:19.569490 3179 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-6d8ceced70\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:19.569651 kubelet[3179]: I0128 01:22:19.569632 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:22:19.569686 kubelet[3179]: E0128 01:22:19.569665 3179 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-6d8ceced70\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-6d8ceced70" Jan 28 01:22:19.576870 kubelet[3179]: I0128 01:22:19.576570 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-6d8ceced70" podStartSLOduration=3.576558547 podStartE2EDuration="3.576558547s" podCreationTimestamp="2026-01-28 01:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:22:19.576348387 +0000 UTC m=+1.171591495" watchObservedRunningTime="2026-01-28 01:22:19.576558547 +0000 UTC m=+1.171801655" Jan 28 01:22:25.052827 kubelet[3179]: I0128 01:22:25.052796 3179 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:22:25.053238 containerd[1726]: 
time="2026-01-28T01:22:25.053098457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:22:25.053430 kubelet[3179]: I0128 01:22:25.053405 3179 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:22:25.783690 systemd[1]: Created slice kubepods-besteffort-pod8c8f47ba_b8a4_480b_bc12_cdb6f032955a.slice - libcontainer container kubepods-besteffort-pod8c8f47ba_b8a4_480b_bc12_cdb6f032955a.slice. Jan 28 01:22:25.833354 kubelet[3179]: I0128 01:22:25.833310 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c8f47ba-b8a4-480b-bc12-cdb6f032955a-xtables-lock\") pod \"kube-proxy-8qtqt\" (UID: \"8c8f47ba-b8a4-480b-bc12-cdb6f032955a\") " pod="kube-system/kube-proxy-8qtqt" Jan 28 01:22:25.833354 kubelet[3179]: I0128 01:22:25.833351 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2px2b\" (UniqueName: \"kubernetes.io/projected/8c8f47ba-b8a4-480b-bc12-cdb6f032955a-kube-api-access-2px2b\") pod \"kube-proxy-8qtqt\" (UID: \"8c8f47ba-b8a4-480b-bc12-cdb6f032955a\") " pod="kube-system/kube-proxy-8qtqt" Jan 28 01:22:25.833593 kubelet[3179]: I0128 01:22:25.833388 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c8f47ba-b8a4-480b-bc12-cdb6f032955a-kube-proxy\") pod \"kube-proxy-8qtqt\" (UID: \"8c8f47ba-b8a4-480b-bc12-cdb6f032955a\") " pod="kube-system/kube-proxy-8qtqt" Jan 28 01:22:25.833593 kubelet[3179]: I0128 01:22:25.833403 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c8f47ba-b8a4-480b-bc12-cdb6f032955a-lib-modules\") pod \"kube-proxy-8qtqt\" (UID: \"8c8f47ba-b8a4-480b-bc12-cdb6f032955a\") " pod="kube-system/kube-proxy-8qtqt" Jan 28 01:22:26.098484 containerd[1726]: time="2026-01-28T01:22:26.096621424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qtqt,Uid:8c8f47ba-b8a4-480b-bc12-cdb6f032955a,Namespace:kube-system,Attempt:0,}" Jan 28 01:22:26.143219 containerd[1726]: time="2026-01-28T01:22:26.143084412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:22:26.143571 containerd[1726]: time="2026-01-28T01:22:26.143140812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:22:26.143571 containerd[1726]: time="2026-01-28T01:22:26.143358212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:26.144119 containerd[1726]: time="2026-01-28T01:22:26.143583451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:26.180455 systemd[1]: Started cri-containerd-f6190f2485a0585863cde2a7c689b36caaf0978fd37b05c6214c34c9da5dc24e.scope - libcontainer container f6190f2485a0585863cde2a7c689b36caaf0978fd37b05c6214c34c9da5dc24e. Jan 28 01:22:26.193288 systemd[1]: Created slice kubepods-besteffort-pod2f278b65_f338_46b4_8714_a3b9b0180558.slice - libcontainer container kubepods-besteffort-pod2f278b65_f338_46b4_8714_a3b9b0180558.slice. 
Jan 28 01:22:26.217341 containerd[1726]: time="2026-01-28T01:22:26.217299992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8qtqt,Uid:8c8f47ba-b8a4-480b-bc12-cdb6f032955a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6190f2485a0585863cde2a7c689b36caaf0978fd37b05c6214c34c9da5dc24e\"" Jan 28 01:22:26.226122 containerd[1726]: time="2026-01-28T01:22:26.225846270Z" level=info msg="CreateContainer within sandbox \"f6190f2485a0585863cde2a7c689b36caaf0978fd37b05c6214c34c9da5dc24e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:22:26.235838 kubelet[3179]: I0128 01:22:26.235802 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f278b65-f338-46b4-8714-a3b9b0180558-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-9xr9d\" (UID: \"2f278b65-f338-46b4-8714-a3b9b0180558\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9xr9d" Jan 28 01:22:26.236149 kubelet[3179]: I0128 01:22:26.235853 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwcmz\" (UniqueName: \"kubernetes.io/projected/2f278b65-f338-46b4-8714-a3b9b0180558-kube-api-access-vwcmz\") pod \"tigera-operator-65cdcdfd6d-9xr9d\" (UID: \"2f278b65-f338-46b4-8714-a3b9b0180558\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9xr9d" Jan 28 01:22:26.258948 containerd[1726]: time="2026-01-28T01:22:26.258905341Z" level=info msg="CreateContainer within sandbox \"f6190f2485a0585863cde2a7c689b36caaf0978fd37b05c6214c34c9da5dc24e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08952ee0a776a0d8af76add047d3c29dbbce7e269d279cb07c8d64528f135f8c\"" Jan 28 01:22:26.259814 containerd[1726]: time="2026-01-28T01:22:26.259780301Z" level=info msg="StartContainer for \"08952ee0a776a0d8af76add047d3c29dbbce7e269d279cb07c8d64528f135f8c\"" Jan 28 01:22:26.281008 systemd[1]: Started cri-containerd-08952ee0a776a0d8af76add047d3c29dbbce7e269d279cb07c8d64528f135f8c.scope - libcontainer container 08952ee0a776a0d8af76add047d3c29dbbce7e269d279cb07c8d64528f135f8c. Jan 28 01:22:26.312193 containerd[1726]: time="2026-01-28T01:22:26.312073327Z" level=info msg="StartContainer for \"08952ee0a776a0d8af76add047d3c29dbbce7e269d279cb07c8d64528f135f8c\" returns successfully" Jan 28 01:22:26.502513 containerd[1726]: time="2026-01-28T01:22:26.502473918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9xr9d,Uid:2f278b65-f338-46b4-8714-a3b9b0180558,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:22:26.543058 containerd[1726]: time="2026-01-28T01:22:26.542959547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:22:26.543206 containerd[1726]: time="2026-01-28T01:22:26.543021107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:22:26.543206 containerd[1726]: time="2026-01-28T01:22:26.543134067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:26.543775 containerd[1726]: time="2026-01-28T01:22:26.543724667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:22:26.558058 systemd[1]: Started cri-containerd-82ae199018ac46ac20a05bcfac5c64cfb6339ec7af5ca3a34be0b117cc7c8372.scope - libcontainer container 82ae199018ac46ac20a05bcfac5c64cfb6339ec7af5ca3a34be0b117cc7c8372. Jan 28 01:22:26.597365 containerd[1726]: time="2026-01-28T01:22:26.597331133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9xr9d,Uid:2f278b65-f338-46b4-8714-a3b9b0180558,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"82ae199018ac46ac20a05bcfac5c64cfb6339ec7af5ca3a34be0b117cc7c8372\"" Jan 28 01:22:26.599427 containerd[1726]: time="2026-01-28T01:22:26.599172332Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:22:28.041115 kubelet[3179]: I0128 01:22:28.040671 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8qtqt" podStartSLOduration=3.040657214 podStartE2EDuration="3.040657214s" podCreationTimestamp="2026-01-28 01:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:22:26.578243738 +0000 UTC m=+8.173486846" watchObservedRunningTime="2026-01-28 01:22:28.040657214 +0000 UTC m=+9.635900442" Jan 28 01:22:28.374707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916416354.mount: Deactivated successfully. Jan 28 01:22:28.869130 containerd[1726]: time="2026-01-28T01:22:28.868278798Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:22:28.873990 containerd[1726]: time="2026-01-28T01:22:28.873963180Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 28 01:22:28.876386 containerd[1726]: time="2026-01-28T01:22:28.876361853Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:22:28.880153 containerd[1726]: time="2026-01-28T01:22:28.880100441Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:22:28.881073 containerd[1726]: time="2026-01-28T01:22:28.881044038Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.281836106s" Jan 28 01:22:28.881170 containerd[1726]: time="2026-01-28T01:22:28.881154558Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 28 01:22:28.888241 containerd[1726]: time="2026-01-28T01:22:28.887852898Z" level=info msg="CreateContainer within sandbox \"82ae199018ac46ac20a05bcfac5c64cfb6339ec7af5ca3a34be0b117cc7c8372\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:22:28.914212 containerd[1726]: time="2026-01-28T01:22:28.914170377Z" level=info msg="CreateContainer within sandbox \"82ae199018ac46ac20a05bcfac5c64cfb6339ec7af5ca3a34be0b117cc7c8372\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} 
returns container id \"ba90d3ece042ba17cdb9bd99ce95c1c10a6040e1968acb4c0f3b63001c844f42\"" Jan 28 01:22:28.914967 containerd[1726]: time="2026-01-28T01:22:28.914764935Z" level=info msg="StartContainer for \"ba90d3ece042ba17cdb9bd99ce95c1c10a6040e1968acb4c0f3b63001c844f42\"" Jan 28 01:22:28.940007 systemd[1]: Started cri-containerd-ba90d3ece042ba17cdb9bd99ce95c1c10a6040e1968acb4c0f3b63001c844f42.scope - libcontainer container ba90d3ece042ba17cdb9bd99ce95c1c10a6040e1968acb4c0f3b63001c844f42. Jan 28 01:22:28.967436 containerd[1726]: time="2026-01-28T01:22:28.967394014Z" level=info msg="StartContainer for \"ba90d3ece042ba17cdb9bd99ce95c1c10a6040e1968acb4c0f3b63001c844f42\" returns successfully" Jan 28 01:22:34.755278 sudo[2226]: pam_unix(sudo:session): session closed for user root Jan 28 01:22:34.829089 sshd[2223]: pam_unix(sshd:session): session closed for user core Jan 28 01:22:34.832258 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:38374.service: Deactivated successfully. Jan 28 01:22:34.834486 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:22:34.834707 systemd[1]: session-9.scope: Consumed 6.296s CPU time, 152.1M memory peak, 0B memory swap peak. Jan 28 01:22:34.838995 systemd-logind[1690]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:22:34.840764 systemd-logind[1690]: Removed session 9. Jan 28 01:22:48.474817 kubelet[3179]: I0128 01:22:48.474555 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-9xr9d" podStartSLOduration=20.191364948 podStartE2EDuration="22.474540652s" podCreationTimestamp="2026-01-28 01:22:26 +0000 UTC" firstStartedPulling="2026-01-28 01:22:26.598773692 +0000 UTC m=+8.194016800" lastFinishedPulling="2026-01-28 01:22:28.881949396 +0000 UTC m=+10.477192504" observedRunningTime="2026-01-28 01:22:29.584892841 +0000 UTC m=+11.180135949" watchObservedRunningTime="2026-01-28 01:22:48.474540652 +0000 UTC m=+30.069783760" Jan 28 01:22:48.507882 systemd[1]: Created slice kubepods-besteffort-pod53873102_d2ce_4d68_8ed0_a8bab30fe19f.slice - libcontainer container kubepods-besteffort-pod53873102_d2ce_4d68_8ed0_a8bab30fe19f.slice. 
Jan 28 01:22:48.580910 kubelet[3179]: I0128 01:22:48.580693 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53873102-d2ce-4d68-8ed0-a8bab30fe19f-tigera-ca-bundle\") pod \"calico-typha-647b46cc75-k9f6p\" (UID: \"53873102-d2ce-4d68-8ed0-a8bab30fe19f\") " pod="calico-system/calico-typha-647b46cc75-k9f6p" Jan 28 01:22:48.580910 kubelet[3179]: I0128 01:22:48.580737 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/53873102-d2ce-4d68-8ed0-a8bab30fe19f-typha-certs\") pod \"calico-typha-647b46cc75-k9f6p\" (UID: \"53873102-d2ce-4d68-8ed0-a8bab30fe19f\") " pod="calico-system/calico-typha-647b46cc75-k9f6p" Jan 28 01:22:48.580910 kubelet[3179]: I0128 01:22:48.580755 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rf6\" (UniqueName: \"kubernetes.io/projected/53873102-d2ce-4d68-8ed0-a8bab30fe19f-kube-api-access-q9rf6\") pod \"calico-typha-647b46cc75-k9f6p\" (UID: \"53873102-d2ce-4d68-8ed0-a8bab30fe19f\") " pod="calico-system/calico-typha-647b46cc75-k9f6p" Jan 28 01:22:48.610249 systemd[1]: Created slice kubepods-besteffort-podacffb1f1_834a_40f5_ab07_b9a452f92639.slice - libcontainer container kubepods-besteffort-podacffb1f1_834a_40f5_ab07_b9a452f92639.slice. Jan 28 01:22:48.681584 kubelet[3179]: I0128 01:22:48.681101 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-var-lib-calico\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681584 kubelet[3179]: I0128 01:22:48.681140 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-cni-bin-dir\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681584 kubelet[3179]: I0128 01:22:48.681256 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-policysync\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681584 kubelet[3179]: I0128 01:22:48.681273 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-cni-log-dir\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681584 kubelet[3179]: I0128 01:22:48.681288 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acffb1f1-834a-40f5-ab07-b9a452f92639-tigera-ca-bundle\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681810 kubelet[3179]: I0128 01:22:48.681328 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" 
(UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-var-run-calico\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681810 kubelet[3179]: I0128 01:22:48.681367 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-cni-net-dir\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681810 kubelet[3179]: I0128 01:22:48.681480 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzs9r\" (UniqueName: \"kubernetes.io/projected/acffb1f1-834a-40f5-ab07-b9a452f92639-kube-api-access-pzs9r\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681810 kubelet[3179]: I0128 01:22:48.681512 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-flexvol-driver-host\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.681810 kubelet[3179]: I0128 01:22:48.681532 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-xtables-lock\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.682167 kubelet[3179]: I0128 01:22:48.681970 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acffb1f1-834a-40f5-ab07-b9a452f92639-lib-modules\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.682167 kubelet[3179]: I0128 01:22:48.681999 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/acffb1f1-834a-40f5-ab07-b9a452f92639-node-certs\") pod \"calico-node-nwgmg\" (UID: \"acffb1f1-834a-40f5-ab07-b9a452f92639\") " pod="calico-system/calico-node-nwgmg" Jan 28 01:22:48.737468 kubelet[3179]: E0128 01:22:48.736676 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:22:48.784651 kubelet[3179]: I0128 01:22:48.784609 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f68d28e5-4350-4cc7-aede-a307338915a7-registration-dir\") pod \"csi-node-driver-s8nm6\" (UID: \"f68d28e5-4350-4cc7-aede-a307338915a7\") " pod="calico-system/csi-node-driver-s8nm6" Jan 28 01:22:48.784916 kubelet[3179]: I0128 01:22:48.784724 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/f68d28e5-4350-4cc7-aede-a307338915a7-kubelet-dir\") pod \"csi-node-driver-s8nm6\" (UID: \"f68d28e5-4350-4cc7-aede-a307338915a7\") " pod="calico-system/csi-node-driver-s8nm6" Jan 28 01:22:48.784916 kubelet[3179]: I0128 01:22:48.784791 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f68d28e5-4350-4cc7-aede-a307338915a7-socket-dir\") pod \"csi-node-driver-s8nm6\" (UID: \"f68d28e5-4350-4cc7-aede-a307338915a7\") " pod="calico-system/csi-node-driver-s8nm6" Jan 28 01:22:48.785207 kubelet[3179]: E0128 01:22:48.785174 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:48.785207 kubelet[3179]: W0128 01:22:48.785191 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:48.785323 kubelet[3179]: E0128 01:22:48.785209 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:48.785526 kubelet[3179]: E0128 01:22:48.785506 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:48.785526 kubelet[3179]: W0128 01:22:48.785522 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:48.785733 kubelet[3179]: E0128 01:22:48.785536 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:48.785733 kubelet[3179]: E0128 01:22:48.785719 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:48.785733 kubelet[3179]: W0128 01:22:48.785727 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:48.785813 kubelet[3179]: E0128 01:22:48.785736 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:48.788293 kubelet[3179]: E0128 01:22:48.787891 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:48.788293 kubelet[3179]: W0128 01:22:48.787908 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:48.788293 kubelet[3179]: E0128 01:22:48.787941 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:22:48.788293 kubelet[3179]: I0128 01:22:48.787968 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f68d28e5-4350-4cc7-aede-a307338915a7-varrun\") pod \"csi-node-driver-s8nm6\" (UID: \"f68d28e5-4350-4cc7-aede-a307338915a7\") " pod="calico-system/csi-node-driver-s8nm6" Jan 28 01:22:48.788293 kubelet[3179]: E0128 01:22:48.788199 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:48.788293 kubelet[3179]: W0128 01:22:48.788212 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:48.788293 kubelet[3179]: E0128 01:22:48.788225 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:48.789532 kubelet[3179]: E0128 01:22:48.789254 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:48.789532 kubelet[3179]: W0128 01:22:48.789273 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:48.789532 kubelet[3179]: E0128 01:22:48.789289 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:48.789970 kubelet[3179]: E0128 01:22:48.789802 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:48.789970 kubelet[3179]: W0128 01:22:48.789817 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:48.789970 kubelet[3179]: E0128 01:22:48.789830 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:48.790092 kubelet[3179]: E0128 01:22:48.790042 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:48.790092 kubelet[3179]: W0128 01:22:48.790052 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:48.790092 kubelet[3179]: E0128 01:22:48.790066 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 28 01:22:48.796993 kubelet[3179]: I0128 01:22:48.796946 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9n4\" (UniqueName: \"kubernetes.io/projected/f68d28e5-4350-4cc7-aede-a307338915a7-kube-api-access-cl9n4\") pod \"csi-node-driver-s8nm6\" (UID: \"f68d28e5-4350-4cc7-aede-a307338915a7\") " pod="calico-system/csi-node-driver-s8nm6"
Jan 28 01:22:48.824162 containerd[1726]: time="2026-01-28T01:22:48.824124568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647b46cc75-k9f6p,Uid:53873102-d2ce-4d68-8ed0-a8bab30fe19f,Namespace:calico-system,Attempt:0,}"
Jan 28 01:22:48.896396 containerd[1726]: time="2026-01-28T01:22:48.896286646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:22:48.896396 containerd[1726]: time="2026-01-28T01:22:48.896339086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:22:48.896396 containerd[1726]: time="2026-01-28T01:22:48.896356646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:22:48.896632 containerd[1726]: time="2026-01-28T01:22:48.896428446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:22:48.909550 kubelet[3179]: E0128 01:22:48.908982 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:22:48.909550 kubelet[3179]: W0128 01:22:48.909003 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:22:48.909550 kubelet[3179]: E0128 01:22:48.909023 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:22:48.915023 systemd[1]: Started cri-containerd-5ba33e11f384819d6161e568e0a38c6b839e3a9e3cad1540cc1a26d1dd89bf2c.scope - libcontainer container 5ba33e11f384819d6161e568e0a38c6b839e3a9e3cad1540cc1a26d1dd89bf2c.
Jan 28 01:22:48.929637 containerd[1726]: time="2026-01-28T01:22:48.929598747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nwgmg,Uid:acffb1f1-834a-40f5-ab07-b9a452f92639,Namespace:calico-system,Attempt:0,}"
Jan 28 01:22:48.955169 containerd[1726]: time="2026-01-28T01:22:48.955063452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647b46cc75-k9f6p,Uid:53873102-d2ce-4d68-8ed0-a8bab30fe19f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ba33e11f384819d6161e568e0a38c6b839e3a9e3cad1540cc1a26d1dd89bf2c\""
Jan 28 01:22:48.957615 containerd[1726]: time="2026-01-28T01:22:48.957519571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 28 01:22:48.969854 containerd[1726]: time="2026-01-28T01:22:48.969650324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:22:48.969854 containerd[1726]: time="2026-01-28T01:22:48.969697604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:22:48.969854 containerd[1726]: time="2026-01-28T01:22:48.969712403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:22:48.969854 containerd[1726]: time="2026-01-28T01:22:48.969776763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:22:48.984014 systemd[1]: Started cri-containerd-50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201.scope - libcontainer container 50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201.
Jan 28 01:22:49.005617 containerd[1726]: time="2026-01-28T01:22:49.005511183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nwgmg,Uid:acffb1f1-834a-40f5-ab07-b9a452f92639,Namespace:calico-system,Attempt:0,} returns sandbox id \"50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201\""
Jan 28 01:22:50.416953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392147203.mount: Deactivated successfully.
Jan 28 01:22:50.519113 kubelet[3179]: E0128 01:22:50.519074 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7"
Jan 28 01:22:51.310161 containerd[1726]: time="2026-01-28T01:22:51.310113881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:51.312369 containerd[1726]: time="2026-01-28T01:22:51.312215840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 28 01:22:51.315352 containerd[1726]: time="2026-01-28T01:22:51.315099199Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:51.318815 containerd[1726]: time="2026-01-28T01:22:51.318788716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:51.319377 containerd[1726]: time="2026-01-28T01:22:51.319349156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.361725145s"
Jan 28 01:22:51.319429 containerd[1726]: time="2026-01-28T01:22:51.319379596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 28 01:22:51.326206 containerd[1726]: time="2026-01-28T01:22:51.326172152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 28 01:22:51.339128 containerd[1726]: time="2026-01-28T01:22:51.339097025Z" level=info msg="CreateContainer within sandbox \"5ba33e11f384819d6161e568e0a38c6b839e3a9e3cad1540cc1a26d1dd89bf2c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 28 01:22:51.394736 containerd[1726]: time="2026-01-28T01:22:51.394692472Z" level=info msg="CreateContainer within sandbox \"5ba33e11f384819d6161e568e0a38c6b839e3a9e3cad1540cc1a26d1dd89bf2c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"274bbab991f8403833bb4ec55523e3b4811b20c3dd34d485ba0098a19a3baf04\""
Jan 28 01:22:51.395279 containerd[1726]: time="2026-01-28T01:22:51.395144432Z" level=info msg="StartContainer for \"274bbab991f8403833bb4ec55523e3b4811b20c3dd34d485ba0098a19a3baf04\""
Jan 28 01:22:51.430039 systemd[1]: Started cri-containerd-274bbab991f8403833bb4ec55523e3b4811b20c3dd34d485ba0098a19a3baf04.scope - libcontainer container 274bbab991f8403833bb4ec55523e3b4811b20c3dd34d485ba0098a19a3baf04.
Jan 28 01:22:51.464055 containerd[1726]: time="2026-01-28T01:22:51.464007352Z" level=info msg="StartContainer for \"274bbab991f8403833bb4ec55523e3b4811b20c3dd34d485ba0098a19a3baf04\" returns successfully"
Jan 28 01:22:51.645807 kubelet[3179]: I0128 01:22:51.644722 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-647b46cc75-k9f6p" podStartSLOduration=1.281064822 podStartE2EDuration="3.644706727s" podCreationTimestamp="2026-01-28 01:22:48 +0000 UTC" firstStartedPulling="2026-01-28 01:22:48.956744811 +0000 UTC m=+30.551987919" lastFinishedPulling="2026-01-28 01:22:51.320386716 +0000 UTC m=+32.915629824" observedRunningTime="2026-01-28 01:22:51.644523167 +0000 UTC m=+33.239766275" watchObservedRunningTime="2026-01-28 01:22:51.644706727 +0000 UTC m=+33.239949835"
Jan 28 01:22:51.681971 kubelet[3179]: E0128 01:22:51.681936 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:22:51.681971 kubelet[3179]: W0128 01:22:51.681962 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:22:51.682122 kubelet[3179]: E0128 01:22:51.681986 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jan 28 01:22:51.738989 kubelet[3179]: E0128 01:22:51.738922 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:51.738989 kubelet[3179]: W0128 01:22:51.738934 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:51.738989 kubelet[3179]: E0128 01:22:51.738944 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:51.739419 kubelet[3179]: E0128 01:22:51.739295 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:51.739419 kubelet[3179]: W0128 01:22:51.739308 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:51.739419 kubelet[3179]: E0128 01:22:51.739318 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:51.739577 kubelet[3179]: E0128 01:22:51.739567 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:51.739689 kubelet[3179]: W0128 01:22:51.739623 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:51.739689 kubelet[3179]: E0128 01:22:51.739637 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:51.740097 kubelet[3179]: E0128 01:22:51.740083 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:51.740908 kubelet[3179]: W0128 01:22:51.740254 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:51.740908 kubelet[3179]: E0128 01:22:51.740272 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:22:51.741284 kubelet[3179]: E0128 01:22:51.741258 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:22:51.741454 kubelet[3179]: W0128 01:22:51.741439 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:22:51.741525 kubelet[3179]: E0128 01:22:51.741514 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
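The three-line failure above is one probe repeated in a tight burst (dozens of near-identical repetitions within the same second are collapsed here): the kubelet scans its FlexVolume plugin directory, finds the nodeagent~uds entry but no runnable uds binary, gets empty output back, and then fails to JSON-decode that empty string. A minimal Go sketch of the decode step; the DriverStatus shape follows the general FlexVolume driver convention and is illustrative, not kubelet source:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is an illustrative stand-in for the JSON object a
// FlexVolume driver is expected to print in response to "init".
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The driver binary was never installed, so its "output" is empty;
	// decoding "" reproduces the exact error text in the log.
	var st DriverStatus
	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input

	// What a healthy driver would print for "init":
	out, _ := json.Marshal(DriverStatus{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	})
	fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
}
```

The ghcr.io/flatcar/calico/pod2daemon-flexvol image being pulled just below is what normally installs this driver, so the burst is expected to stop once that init container has run.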
Jan 28 01:22:52.518414 kubelet[3179]: E0128 01:22:52.518085 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7"
Jan 28 01:22:52.696664 kubelet[3179]: E0128 01:22:52.696356 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:22:52.696664 kubelet[3179]: W0128 01:22:52.696380 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:22:52.696664 kubelet[3179]: E0128 01:22:52.696403 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:22:52.732881 containerd[1726]: time="2026-01-28T01:22:52.732333221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:52.735228 containerd[1726]: time="2026-01-28T01:22:52.735095499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 28 01:22:52.738316 containerd[1726]: time="2026-01-28T01:22:52.738052858Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:52.742273 containerd[1726]: time="2026-01-28T01:22:52.742051815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:52.743933 containerd[1726]: time="2026-01-28T01:22:52.743215694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.417004782s"
Jan 28 01:22:52.743933 containerd[1726]: time="2026-01-28T01:22:52.743245054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 28 01:22:52.751629 containerd[1726]: time="2026-01-28T01:22:52.751601369Z" level=info msg="CreateContainer within sandbox \"50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 28 01:22:52.781966 containerd[1726]: time="2026-01-28T01:22:52.781736751Z" level=info msg="CreateContainer within sandbox \"50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef\""
Jan 28 01:22:52.785011 containerd[1726]: time="2026-01-28T01:22:52.783985989Z" level=info msg="StartContainer for \"8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef\""
Jan 28 01:22:52.818016 systemd[1]: Started cri-containerd-8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef.scope - libcontainer container 8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef.
Jan 28 01:22:52.846415 containerd[1726]: time="2026-01-28T01:22:52.846369551Z" level=info msg="StartContainer for \"8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef\" returns successfully"
Jan 28 01:22:52.853022 systemd[1]: cri-containerd-8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef.scope: Deactivated successfully.
Jan 28 01:22:52.873363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef-rootfs.mount: Deactivated successfully.
Jan 28 01:22:53.910634 containerd[1726]: time="2026-01-28T01:22:53.910501420Z" level=info msg="shim disconnected" id=8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef namespace=k8s.io
Jan 28 01:22:53.910634 containerd[1726]: time="2026-01-28T01:22:53.910581899Z" level=warning msg="cleaning up after shim disconnected" id=8f33ccfb402b18b0d55dd75f876a7325ba8f3f00f37db4892e5eac09137aa2ef namespace=k8s.io
Jan 28 01:22:53.910634 containerd[1726]: time="2026-01-28T01:22:53.910591539Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:22:54.518768 kubelet[3179]: E0128 01:22:54.518428 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7"
Jan 28 01:22:54.629657 containerd[1726]: time="2026-01-28T01:22:54.629621099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 28 01:22:56.518457 kubelet[3179]: E0128 01:22:56.518113 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7"
Jan 28 01:22:57.910959 containerd[1726]: time="2026-01-28T01:22:57.910910689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:57.913405 containerd[1726]: time="2026-01-28T01:22:57.913245848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 28 01:22:57.917948 containerd[1726]: time="2026-01-28T01:22:57.917030966Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:57.921083 containerd[1726]: time="2026-01-28T01:22:57.921056883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:22:57.921669 containerd[1726]: time="2026-01-28T01:22:57.921639403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.291979904s"
Jan 28 01:22:57.921721 containerd[1726]: time="2026-01-28T01:22:57.921671843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 28 01:22:57.928598 containerd[1726]: time="2026-01-28T01:22:57.928572038Z" level=info msg="CreateContainer within sandbox \"50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 28 01:22:57.969670 containerd[1726]: time="2026-01-28T01:22:57.969623813Z" level=info msg="CreateContainer within sandbox \"50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6\""
Jan 28 01:22:57.970810 containerd[1726]: time="2026-01-28T01:22:57.970597813Z" level=info msg="StartContainer for \"5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6\""
Jan 28 01:22:57.996984 systemd[1]: Started cri-containerd-5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6.scope - libcontainer container 5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6.
Jan 28 01:22:58.027288 containerd[1726]: time="2026-01-28T01:22:58.027243738Z" level=info msg="StartContainer for \"5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6\" returns successfully"
Jan 28 01:22:58.519340 kubelet[3179]: E0128 01:22:58.519286 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7"
Jan 28 01:22:59.204151 containerd[1726]: time="2026-01-28T01:22:59.204091937Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 28 01:22:59.206787 systemd[1]: cri-containerd-5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6.scope: Deactivated successfully.
Jan 28 01:22:59.225910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6-rootfs.mount: Deactivated successfully.
Jan 28 01:22:59.290038 kubelet[3179]: I0128 01:22:59.289799 3179 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 28 01:23:00.066110 systemd[1]: Created slice kubepods-besteffort-podf68d28e5_4350_4cc7_aede_a307338915a7.slice - libcontainer container kubepods-besteffort-podf68d28e5_4350_4cc7_aede_a307338915a7.slice.
Jan 28 01:23:00.068275 containerd[1726]: time="2026-01-28T01:23:00.068089571Z" level=info msg="shim disconnected" id=5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6 namespace=k8s.io
Jan 28 01:23:00.068275 containerd[1726]: time="2026-01-28T01:23:00.068157171Z" level=warning msg="cleaning up after shim disconnected" id=5037381d336de362b911032479543072e50e9dc8b2d23723f5b10e51086100a6 namespace=k8s.io
Jan 28 01:23:00.068275 containerd[1726]: time="2026-01-28T01:23:00.068165931Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:23:00.081744 containerd[1726]: time="2026-01-28T01:23:00.081628200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nm6,Uid:f68d28e5-4350-4cc7-aede-a307338915a7,Namespace:calico-system,Attempt:0,}"
Jan 28 01:23:00.087587 systemd[1]: Created slice kubepods-burstable-podb93794f0_c760_43f8_9817_c3814f113c55.slice - libcontainer container kubepods-burstable-podb93794f0_c760_43f8_9817_c3814f113c55.slice.
Jan 28 01:23:00.100906 systemd[1]: Created slice kubepods-besteffort-pod1055b396_3282_41c6_8cd5_0cd8ecaec9e4.slice - libcontainer container kubepods-besteffort-pod1055b396_3282_41c6_8cd5_0cd8ecaec9e4.slice.
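The level=error reload above is the crux of everything that follows: install-cni has so far only written /etc/cni/net.d/calico-kubeconfig, and containerd finds no loadable network config in that directory. For orientation, a minimal Go sketch of the .conflist shape the runtime is waiting for; the file contents here are an illustrative assumption (modeled on a typical Calico 10-calico.conflist), not taken from this host:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NetConfList mirrors the top-level CNI .conflist layout (cniVersion,
// name, plugins) that a runtime loads from /etc/cni/net.d.
type NetConfList struct {
	CNIVersion string            `json:"cniVersion"`
	Name       string            `json:"name"`
	Plugins    []json.RawMessage `json:"plugins"`
}

func main() {
	// Hypothetical stand-in for the conflist install-cni eventually
	// drops; the field values are illustrative only.
	raw := []byte(`{
	  "cniVersion": "0.3.1",
	  "name": "k8s-pod-network",
	  "plugins": [
	    {"type": "calico", "datastore_type": "kubernetes"},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`)

	var list NetConfList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	fmt.Printf("loaded %q with %d plugins\n", list.Name, len(list.Plugins))
}
```

Until a file of this shape exists, the runtime keeps reporting NetworkReady=false, which is exactly the "cni plugin not initialized" state the pod_workers errors above keep retrying against.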
Jan 28 01:23:00.108489 systemd[1]: Created slice kubepods-besteffort-pode15a9a69_173f_490e_af7a_8a44d37eda4d.slice - libcontainer container kubepods-besteffort-pode15a9a69_173f_490e_af7a_8a44d37eda4d.slice.
Jan 28 01:23:00.120136 systemd[1]: Created slice kubepods-besteffort-pod7ad0c2f8_bb34_49c9_a1bb_d618f47675e5.slice - libcontainer container kubepods-besteffort-pod7ad0c2f8_bb34_49c9_a1bb_d618f47675e5.slice.
Jan 28 01:23:00.126761 systemd[1]: Created slice kubepods-besteffort-pode1527d25_60e3_4960_9f63_e5d366bf57e5.slice - libcontainer container kubepods-besteffort-pode1527d25_60e3_4960_9f63_e5d366bf57e5.slice.
Jan 28 01:23:00.135237 systemd[1]: Created slice kubepods-besteffort-pod0d110ad0_2f02_402c_8f06_ffae6a1d70c4.slice - libcontainer container kubepods-besteffort-pod0d110ad0_2f02_402c_8f06_ffae6a1d70c4.slice.
Jan 28 01:23:00.155069 systemd[1]: Created slice kubepods-burstable-podfc15c614_7b8d_4699_bd70_980cb39baa43.slice - libcontainer container kubepods-burstable-podfc15c614_7b8d_4699_bd70_980cb39baa43.slice.
Jan 28 01:23:00.184065 kubelet[3179]: I0128 01:23:00.183963 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-whisker-ca-bundle\") pod \"whisker-64b8f9cd5f-h7lns\" (UID: \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\") " pod="calico-system/whisker-64b8f9cd5f-h7lns"
Jan 28 01:23:00.184065 kubelet[3179]: I0128 01:23:00.184025 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmj2\" (UniqueName: \"kubernetes.io/projected/b93794f0-c760-43f8-9817-c3814f113c55-kube-api-access-qlmj2\") pod \"coredns-66bc5c9577-lbbmn\" (UID: \"b93794f0-c760-43f8-9817-c3814f113c55\") " pod="kube-system/coredns-66bc5c9577-lbbmn"
Jan 28 01:23:00.184065 kubelet[3179]: I0128 01:23:00.184045 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1055b396-3282-41c6-8cd5-0cd8ecaec9e4-config\") pod \"goldmane-7c778bb748-lmhb6\" (UID: \"1055b396-3282-41c6-8cd5-0cd8ecaec9e4\") " pod="calico-system/goldmane-7c778bb748-lmhb6"
Jan 28 01:23:00.184065 kubelet[3179]: I0128 01:23:00.184060 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jkx\" (UniqueName: \"kubernetes.io/projected/1055b396-3282-41c6-8cd5-0cd8ecaec9e4-kube-api-access-79jkx\") pod \"goldmane-7c778bb748-lmhb6\" (UID: \"1055b396-3282-41c6-8cd5-0cd8ecaec9e4\") " pod="calico-system/goldmane-7c778bb748-lmhb6"
Jan 28 01:23:00.184472 kubelet[3179]: I0128 01:23:00.184080 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e15a9a69-173f-490e-af7a-8a44d37eda4d-calico-apiserver-certs\") pod \"calico-apiserver-69c4f6486c-pzztc\" (UID: \"e15a9a69-173f-490e-af7a-8a44d37eda4d\") " pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc"
Jan 28 01:23:00.184472 kubelet[3179]: I0128 01:23:00.184094 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b93794f0-c760-43f8-9817-c3814f113c55-config-volume\") pod \"coredns-66bc5c9577-lbbmn\" (UID: \"b93794f0-c760-43f8-9817-c3814f113c55\") " pod="kube-system/coredns-66bc5c9577-lbbmn"
Jan 28 01:23:00.184472 kubelet[3179]: I0128 01:23:00.184114 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1527d25-60e3-4960-9f63-e5d366bf57e5-calico-apiserver-certs\") pod \"calico-apiserver-69c4f6486c-snwn4\" (UID: \"e1527d25-60e3-4960-9f63-e5d366bf57e5\") " pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4"
Jan 28 01:23:00.184472 kubelet[3179]: I0128 01:23:00.184128 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc15c614-7b8d-4699-bd70-980cb39baa43-config-volume\") pod \"coredns-66bc5c9577-qhjz2\" (UID: \"fc15c614-7b8d-4699-bd70-980cb39baa43\") " pod="kube-system/coredns-66bc5c9577-qhjz2"
Jan 28 01:23:00.184472 kubelet[3179]: I0128 01:23:00.184146 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kbq\" (UniqueName: \"kubernetes.io/projected/e15a9a69-173f-490e-af7a-8a44d37eda4d-kube-api-access-f2kbq\") pod \"calico-apiserver-69c4f6486c-pzztc\" (UID: \"e15a9a69-173f-490e-af7a-8a44d37eda4d\") " pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc"
Jan 28 01:23:00.184588 kubelet[3179]: I0128 01:23:00.184160 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkj8x\" (UniqueName: \"kubernetes.io/projected/fc15c614-7b8d-4699-bd70-980cb39baa43-kube-api-access-nkj8x\") pod \"coredns-66bc5c9577-qhjz2\" (UID: \"fc15c614-7b8d-4699-bd70-980cb39baa43\") " pod="kube-system/coredns-66bc5c9577-qhjz2"
Jan 28 01:23:00.184588 kubelet[3179]: I0128 01:23:00.184173 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-whisker-backend-key-pair\") pod \"whisker-64b8f9cd5f-h7lns\" (UID: \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\") " pod="calico-system/whisker-64b8f9cd5f-h7lns"
Jan 28 01:23:00.184588 kubelet[3179]: I0128 01:23:00.184190 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28chl\" (UniqueName: \"kubernetes.io/projected/e1527d25-60e3-4960-9f63-e5d366bf57e5-kube-api-access-28chl\") pod \"calico-apiserver-69c4f6486c-snwn4\" (UID: \"e1527d25-60e3-4960-9f63-e5d366bf57e5\") " pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4"
Jan 28 01:23:00.184588 kubelet[3179]: I0128 01:23:00.184214 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5c8\" (UniqueName: \"kubernetes.io/projected/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-kube-api-access-bc5c8\") pod \"whisker-64b8f9cd5f-h7lns\" (UID: \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\") " pod="calico-system/whisker-64b8f9cd5f-h7lns"
Jan 28 01:23:00.184588 kubelet[3179]: I0128 01:23:00.184233 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1055b396-3282-41c6-8cd5-0cd8ecaec9e4-goldmane-key-pair\") pod \"goldmane-7c778bb748-lmhb6\" (UID: \"1055b396-3282-41c6-8cd5-0cd8ecaec9e4\") " pod="calico-system/goldmane-7c778bb748-lmhb6"
Jan 28 01:23:00.184702 kubelet[3179]: I0128 01:23:00.184250 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ad0c2f8-bb34-49c9-a1bb-d618f47675e5-tigera-ca-bundle\") pod \"calico-kube-controllers-56cc7cdcfb-z7vlh\" (UID: \"7ad0c2f8-bb34-49c9-a1bb-d618f47675e5\") " pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh"
Jan 28 01:23:00.184702 kubelet[3179]: I0128 01:23:00.184265 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tggsn\" (UniqueName: \"kubernetes.io/projected/7ad0c2f8-bb34-49c9-a1bb-d618f47675e5-kube-api-access-tggsn\") pod \"calico-kube-controllers-56cc7cdcfb-z7vlh\" (UID: \"7ad0c2f8-bb34-49c9-a1bb-d618f47675e5\") " pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh"
Jan 28 01:23:00.184702 kubelet[3179]: I0128 01:23:00.184290 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1055b396-3282-41c6-8cd5-0cd8ecaec9e4-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-lmhb6\" (UID: \"1055b396-3282-41c6-8cd5-0cd8ecaec9e4\") " pod="calico-system/goldmane-7c778bb748-lmhb6"
Jan 28 01:23:00.219472 containerd[1726]: time="2026-01-28T01:23:00.219424929Z" level=error msg="Failed to destroy network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:23:00.221222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f-shm.mount: Deactivated successfully.
Jan 28 01:23:00.221926 containerd[1726]: time="2026-01-28T01:23:00.221876687Z" level=error msg="encountered an error cleaning up failed sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:23:00.222022 containerd[1726]: time="2026-01-28T01:23:00.221942927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nm6,Uid:f68d28e5-4350-4cc7-aede-a307338915a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:23:00.223234 kubelet[3179]: E0128 01:23:00.222204 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 01:23:00.223234 kubelet[3179]: E0128 01:23:00.222268 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nm6"
pod="calico-system/csi-node-driver-s8nm6" Jan 28 01:23:00.223234 kubelet[3179]: E0128 01:23:00.222289 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nm6" Jan 28 01:23:00.223385 kubelet[3179]: E0128 01:23:00.222335 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:23:00.399072 containerd[1726]: time="2026-01-28T01:23:00.398623264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lbbmn,Uid:b93794f0-c760-43f8-9817-c3814f113c55,Namespace:kube-system,Attempt:0,}" Jan 28 01:23:00.410551 containerd[1726]: time="2026-01-28T01:23:00.410509534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lmhb6,Uid:1055b396-3282-41c6-8cd5-0cd8ecaec9e4,Namespace:calico-system,Attempt:0,}" Jan 28 01:23:00.418526 containerd[1726]: time="2026-01-28T01:23:00.418490648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4f6486c-pzztc,Uid:e15a9a69-173f-490e-af7a-8a44d37eda4d,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:23:00.430385 containerd[1726]: time="2026-01-28T01:23:00.430349398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cc7cdcfb-z7vlh,Uid:7ad0c2f8-bb34-49c9-a1bb-d618f47675e5,Namespace:calico-system,Attempt:0,}" Jan 28 01:23:00.440655 containerd[1726]: time="2026-01-28T01:23:00.440608270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4f6486c-snwn4,Uid:e1527d25-60e3-4960-9f63-e5d366bf57e5,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:23:00.451342 containerd[1726]: time="2026-01-28T01:23:00.451305661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64b8f9cd5f-h7lns,Uid:0d110ad0-2f02-402c-8f06-ffae6a1d70c4,Namespace:calico-system,Attempt:0,}" Jan 28 01:23:00.472114 containerd[1726]: time="2026-01-28T01:23:00.471895765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qhjz2,Uid:fc15c614-7b8d-4699-bd70-980cb39baa43,Namespace:kube-system,Attempt:0,}" Jan 28 01:23:00.511874 containerd[1726]: time="2026-01-28T01:23:00.511740853Z" level=error msg="Failed to destroy network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.512338 containerd[1726]: time="2026-01-28T01:23:00.512111492Z" level=error msg="encountered an 
error cleaning up failed sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.512338 containerd[1726]: time="2026-01-28T01:23:00.512166052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lbbmn,Uid:b93794f0-c760-43f8-9817-c3814f113c55,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.512523 kubelet[3179]: E0128 01:23:00.512362 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.512523 kubelet[3179]: E0128 01:23:00.512426 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lbbmn" Jan 28 01:23:00.512523 kubelet[3179]: E0128 01:23:00.512444 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lbbmn" Jan 28 01:23:00.512741 kubelet[3179]: E0128 01:23:00.512496 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lbbmn_kube-system(b93794f0-c760-43f8-9817-c3814f113c55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lbbmn_kube-system(b93794f0-c760-43f8-9817-c3814f113c55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lbbmn" podUID="b93794f0-c760-43f8-9817-c3814f113c55" Jan 28 01:23:00.539230 containerd[1726]: time="2026-01-28T01:23:00.539119630Z" level=error msg="Failed to destroy network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.539554 containerd[1726]: 
time="2026-01-28T01:23:00.539530390Z" level=error msg="encountered an error cleaning up failed sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.539720 containerd[1726]: time="2026-01-28T01:23:00.539633630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lmhb6,Uid:1055b396-3282-41c6-8cd5-0cd8ecaec9e4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.539886 kubelet[3179]: E0128 01:23:00.539827 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.539886 kubelet[3179]: E0128 01:23:00.539896 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lmhb6" Jan 28 01:23:00.540049 kubelet[3179]: E0128 01:23:00.539914 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lmhb6" Jan 28 01:23:00.540049 kubelet[3179]: E0128 01:23:00.539960 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-lmhb6_calico-system(1055b396-3282-41c6-8cd5-0cd8ecaec9e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-lmhb6_calico-system(1055b396-3282-41c6-8cd5-0cd8ecaec9e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:23:00.641905 kubelet[3179]: I0128 01:23:00.641754 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:00.643276 containerd[1726]: time="2026-01-28T01:23:00.643227226Z" level=info msg="StopPodSandbox for 
\"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\"" Jan 28 01:23:00.644262 containerd[1726]: time="2026-01-28T01:23:00.644070226Z" level=info msg="Ensure that sandbox 7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae in task-service has been cleanup successfully" Jan 28 01:23:00.645192 kubelet[3179]: I0128 01:23:00.645160 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:00.647588 containerd[1726]: time="2026-01-28T01:23:00.646542064Z" level=info msg="StopPodSandbox for \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\"" Jan 28 01:23:00.650261 containerd[1726]: time="2026-01-28T01:23:00.649933821Z" level=info msg="Ensure that sandbox 4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f in task-service has been cleanup successfully" Jan 28 01:23:00.658652 kubelet[3179]: I0128 01:23:00.658275 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:00.665049 containerd[1726]: time="2026-01-28T01:23:00.664313929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:23:00.665253 containerd[1726]: time="2026-01-28T01:23:00.665212529Z" level=error msg="Failed to destroy network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.666067 containerd[1726]: time="2026-01-28T01:23:00.666015728Z" level=info msg="StopPodSandbox for \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\"" Jan 28 01:23:00.668955 containerd[1726]: time="2026-01-28T01:23:00.667704647Z" level=info msg="Ensure that sandbox 6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f in task-service has been cleanup successfully" Jan 28 01:23:00.669443 containerd[1726]: time="2026-01-28T01:23:00.669154886Z" level=error msg="encountered an error cleaning up failed sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.669516 containerd[1726]: time="2026-01-28T01:23:00.669460325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4f6486c-pzztc,Uid:e15a9a69-173f-490e-af7a-8a44d37eda4d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.671324 kubelet[3179]: E0128 01:23:00.669911 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.671324 
kubelet[3179]: E0128 01:23:00.669954 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" Jan 28 01:23:00.671324 kubelet[3179]: E0128 01:23:00.669972 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" Jan 28 01:23:00.671424 kubelet[3179]: E0128 01:23:00.670029 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69c4f6486c-pzztc_calico-apiserver(e15a9a69-173f-490e-af7a-8a44d37eda4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69c4f6486c-pzztc_calico-apiserver(e15a9a69-173f-490e-af7a-8a44d37eda4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:23:00.704849 containerd[1726]: time="2026-01-28T01:23:00.704582897Z" level=error msg="Failed to destroy network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.705077 containerd[1726]: time="2026-01-28T01:23:00.705053617Z" level=error msg="encountered an error cleaning up failed sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.705180 containerd[1726]: time="2026-01-28T01:23:00.705161656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cc7cdcfb-z7vlh,Uid:7ad0c2f8-bb34-49c9-a1bb-d618f47675e5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.705817 kubelet[3179]: E0128 01:23:00.705443 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.705817 kubelet[3179]: E0128 01:23:00.705493 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" Jan 28 01:23:00.705817 kubelet[3179]: E0128 01:23:00.705512 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" Jan 28 01:23:00.707217 kubelet[3179]: E0128 01:23:00.705559 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56cc7cdcfb-z7vlh_calico-system(7ad0c2f8-bb34-49c9-a1bb-d618f47675e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56cc7cdcfb-z7vlh_calico-system(7ad0c2f8-bb34-49c9-a1bb-d618f47675e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:23:00.737542 containerd[1726]: time="2026-01-28T01:23:00.737496790Z" level=error msg="Failed to destroy network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.738313 containerd[1726]: time="2026-01-28T01:23:00.738145190Z" level=error msg="Failed to destroy network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.738313 containerd[1726]: time="2026-01-28T01:23:00.738266230Z" level=error msg="encountered an error cleaning up failed sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.738767 containerd[1726]: time="2026-01-28T01:23:00.738741309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64b8f9cd5f-h7lns,Uid:0d110ad0-2f02-402c-8f06-ffae6a1d70c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.740997 kubelet[3179]: E0128 01:23:00.740690 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.740997 kubelet[3179]: E0128 01:23:00.740739 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64b8f9cd5f-h7lns" Jan 28 01:23:00.740997 kubelet[3179]: E0128 01:23:00.740760 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64b8f9cd5f-h7lns" Jan 28 01:23:00.741132 containerd[1726]: time="2026-01-28T01:23:00.740901068Z" level=error msg="encountered an error cleaning up failed sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.741132 containerd[1726]: time="2026-01-28T01:23:00.740943948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qhjz2,Uid:fc15c614-7b8d-4699-bd70-980cb39baa43,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.741184 kubelet[3179]: E0128 01:23:00.740806 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64b8f9cd5f-h7lns_calico-system(0d110ad0-2f02-402c-8f06-ffae6a1d70c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64b8f9cd5f-h7lns_calico-system(0d110ad0-2f02-402c-8f06-ffae6a1d70c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64b8f9cd5f-h7lns" podUID="0d110ad0-2f02-402c-8f06-ffae6a1d70c4" Jan 28 01:23:00.741804 kubelet[3179]: E0128 01:23:00.741546 3179 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.741804 kubelet[3179]: E0128 01:23:00.741583 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qhjz2" Jan 28 01:23:00.741804 kubelet[3179]: E0128 01:23:00.741599 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qhjz2" Jan 28 01:23:00.743175 kubelet[3179]: E0128 01:23:00.741635 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qhjz2_kube-system(fc15c614-7b8d-4699-bd70-980cb39baa43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qhjz2_kube-system(fc15c614-7b8d-4699-bd70-980cb39baa43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qhjz2" podUID="fc15c614-7b8d-4699-bd70-980cb39baa43" Jan 28 01:23:00.749454 containerd[1726]: time="2026-01-28T01:23:00.749340141Z" level=error msg="StopPodSandbox for \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\" failed" error="failed to destroy network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.749883 kubelet[3179]: E0128 01:23:00.749721 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:00.749883 kubelet[3179]: E0128 01:23:00.749772 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f"} Jan 28 01:23:00.749883 kubelet[3179]: E0128 01:23:00.749815 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"b93794f0-c760-43f8-9817-c3814f113c55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:23:00.749883 kubelet[3179]: E0128 01:23:00.749857 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b93794f0-c760-43f8-9817-c3814f113c55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lbbmn" podUID="b93794f0-c760-43f8-9817-c3814f113c55" Jan 28 01:23:00.750382 containerd[1726]: time="2026-01-28T01:23:00.750349860Z" level=error msg="StopPodSandbox for \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\" failed" error="failed to destroy network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.750538 kubelet[3179]: E0128 01:23:00.750497 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:00.750581 kubelet[3179]: E0128 01:23:00.750543 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae"} Jan 28 01:23:00.750581 kubelet[3179]: E0128 01:23:00.750565 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1055b396-3282-41c6-8cd5-0cd8ecaec9e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:23:00.750669 kubelet[3179]: E0128 01:23:00.750583 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1055b396-3282-41c6-8cd5-0cd8ecaec9e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:23:00.752109 containerd[1726]: 
time="2026-01-28T01:23:00.752067299Z" level=error msg="Failed to destroy network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.752736 containerd[1726]: time="2026-01-28T01:23:00.752702978Z" level=error msg="encountered an error cleaning up failed sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.752891 containerd[1726]: time="2026-01-28T01:23:00.752757418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4f6486c-snwn4,Uid:e1527d25-60e3-4960-9f63-e5d366bf57e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.753089 kubelet[3179]: E0128 01:23:00.753057 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.753146 kubelet[3179]: E0128 01:23:00.753105 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" Jan 28 01:23:00.753146 kubelet[3179]: E0128 01:23:00.753124 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" Jan 28 01:23:00.753241 kubelet[3179]: E0128 01:23:00.753166 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69c4f6486c-snwn4_calico-apiserver(e1527d25-60e3-4960-9f63-e5d366bf57e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69c4f6486c-snwn4_calico-apiserver(e1527d25-60e3-4960-9f63-e5d366bf57e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:23:00.767683 containerd[1726]: time="2026-01-28T01:23:00.767636806Z" level=error msg="StopPodSandbox for \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\" failed" error="failed to destroy network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:00.768013 kubelet[3179]: E0128 01:23:00.767882 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:00.768013 kubelet[3179]: E0128 01:23:00.767931 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f"} Jan 28 01:23:00.768013 kubelet[3179]: E0128 01:23:00.767961 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f68d28e5-4350-4cc7-aede-a307338915a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:23:00.768013 kubelet[3179]: E0128 01:23:00.767985 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f68d28e5-4350-4cc7-aede-a307338915a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:23:01.664007 kubelet[3179]: I0128 01:23:01.661700 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:01.664350 containerd[1726]: time="2026-01-28T01:23:01.663569883Z" level=info msg="StopPodSandbox for \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\"" Jan 28 01:23:01.664350 containerd[1726]: time="2026-01-28T01:23:01.663776603Z" level=info msg="Ensure that sandbox 24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0 in task-service has been cleanup successfully" Jan 28 01:23:01.665880 kubelet[3179]: I0128 01:23:01.665624 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:01.666746 containerd[1726]: time="2026-01-28T01:23:01.666711400Z" level=info msg="StopPodSandbox for 
\"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\"" Jan 28 01:23:01.667063 containerd[1726]: time="2026-01-28T01:23:01.666894880Z" level=info msg="Ensure that sandbox 2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee in task-service has been cleanup successfully" Jan 28 01:23:01.668307 kubelet[3179]: I0128 01:23:01.668280 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:01.669431 containerd[1726]: time="2026-01-28T01:23:01.669406678Z" level=info msg="StopPodSandbox for \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\"" Jan 28 01:23:01.670460 kubelet[3179]: I0128 01:23:01.670091 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:01.670716 containerd[1726]: time="2026-01-28T01:23:01.670692597Z" level=info msg="Ensure that sandbox 560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25 in task-service has been cleanup successfully" Jan 28 01:23:01.671812 containerd[1726]: time="2026-01-28T01:23:01.671783796Z" level=info msg="StopPodSandbox for \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\"" Jan 28 01:23:01.671965 containerd[1726]: time="2026-01-28T01:23:01.671945596Z" level=info msg="Ensure that sandbox d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc in task-service has been cleanup successfully" Jan 28 01:23:01.676745 kubelet[3179]: I0128 01:23:01.676713 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:01.677787 containerd[1726]: time="2026-01-28T01:23:01.677460152Z" level=info msg="StopPodSandbox for \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\"" Jan 28 01:23:01.677787 containerd[1726]: time="2026-01-28T01:23:01.677615311Z" level=info msg="Ensure that sandbox 6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837 in task-service has been cleanup successfully" Jan 28 01:23:01.728449 containerd[1726]: time="2026-01-28T01:23:01.728398990Z" level=error msg="StopPodSandbox for \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\" failed" error="failed to destroy network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:01.728941 kubelet[3179]: E0128 01:23:01.728906 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:01.729072 kubelet[3179]: E0128 01:23:01.729052 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee"} Jan 28 01:23:01.729158 kubelet[3179]: E0128 01:23:01.729145 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"e1527d25-60e3-4960-9f63-e5d366bf57e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:23:01.729369 kubelet[3179]: E0128 01:23:01.729219 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1527d25-60e3-4960-9f63-e5d366bf57e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:23:01.729433 containerd[1726]: time="2026-01-28T01:23:01.729259830Z" level=error msg="StopPodSandbox for \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\" failed" error="failed to destroy network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:01.730918 kubelet[3179]: E0128 01:23:01.729944 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:01.730918 kubelet[3179]: E0128 01:23:01.729980 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25"} Jan 28 01:23:01.730918 kubelet[3179]: E0128 01:23:01.730002 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc15c614-7b8d-4699-bd70-980cb39baa43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:23:01.730918 kubelet[3179]: E0128 01:23:01.730022 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc15c614-7b8d-4699-bd70-980cb39baa43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qhjz2" podUID="fc15c614-7b8d-4699-bd70-980cb39baa43" Jan 28 01:23:01.734362 
containerd[1726]: time="2026-01-28T01:23:01.734323906Z" level=error msg="StopPodSandbox for \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\" failed" error="failed to destroy network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:01.734676 kubelet[3179]: E0128 01:23:01.734498 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:01.734676 kubelet[3179]: E0128 01:23:01.734527 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0"} Jan 28 01:23:01.734676 kubelet[3179]: E0128 01:23:01.734550 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:23:01.734676 kubelet[3179]: E0128 01:23:01.734569 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64b8f9cd5f-h7lns" podUID="0d110ad0-2f02-402c-8f06-ffae6a1d70c4" Jan 28 01:23:01.740197 containerd[1726]: time="2026-01-28T01:23:01.740157101Z" level=error msg="StopPodSandbox for \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\" failed" error="failed to destroy network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:01.740692 kubelet[3179]: E0128 01:23:01.740559 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:01.740692 kubelet[3179]: E0128 01:23:01.740613 3179 kuberuntime_manager.go:1665] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc"} Jan 28 01:23:01.740692 kubelet[3179]: E0128 01:23:01.740641 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ad0c2f8-bb34-49c9-a1bb-d618f47675e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:23:01.740692 kubelet[3179]: E0128 01:23:01.740668 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ad0c2f8-bb34-49c9-a1bb-d618f47675e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:23:01.745294 containerd[1726]: time="2026-01-28T01:23:01.745257457Z" level=error msg="StopPodSandbox for \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\" failed" error="failed to destroy network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:23:01.745674 kubelet[3179]: E0128 01:23:01.745556 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:01.745674 kubelet[3179]: E0128 01:23:01.745595 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837"} Jan 28 01:23:01.745674 kubelet[3179]: E0128 01:23:01.745620 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e15a9a69-173f-490e-af7a-8a44d37eda4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:23:01.745674 kubelet[3179]: E0128 01:23:01.745639 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e15a9a69-173f-490e-af7a-8a44d37eda4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:23:06.544799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807159400.mount: Deactivated successfully. Jan 28 01:23:06.920135 containerd[1726]: time="2026-01-28T01:23:06.919286800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:06.922426 containerd[1726]: time="2026-01-28T01:23:06.921808558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 28 01:23:06.925869 containerd[1726]: time="2026-01-28T01:23:06.925217996Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:06.929749 containerd[1726]: time="2026-01-28T01:23:06.929697992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:23:06.930431 containerd[1726]: time="2026-01-28T01:23:06.930218192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.265866143s" Jan 28 01:23:06.930431 containerd[1726]: time="2026-01-28T01:23:06.930253072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 28 01:23:06.963897 containerd[1726]: time="2026-01-28T01:23:06.963826884Z" level=info msg="CreateContainer within sandbox \"50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:23:07.004435 containerd[1726]: time="2026-01-28T01:23:06.998406977Z" level=info msg="CreateContainer within sandbox \"50b835795cfeba735f8e9cf9614aef544c9650ea739c67765982da4e86b4c201\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e\"" Jan 28 01:23:07.005765 containerd[1726]: time="2026-01-28T01:23:07.005735651Z" level=info msg="StartContainer for \"1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e\"" Jan 28 01:23:07.037034 systemd[1]: Started cri-containerd-1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e.scope - libcontainer container 1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e. Jan 28 01:23:07.067158 containerd[1726]: time="2026-01-28T01:23:07.067108045Z" level=info msg="StartContainer for \"1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e\" returns successfully" Jan 28 01:23:07.459333 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:23:07.459469 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 01:23:07.584004 containerd[1726]: time="2026-01-28T01:23:07.583775592Z" level=info msg="StopPodSandbox for \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\"" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.671 [INFO][4406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.671 [INFO][4406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" iface="eth0" netns="/var/run/netns/cni-ac9a0dd6-2bd0-755c-ff95-408aca9710d5" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.671 [INFO][4406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" iface="eth0" netns="/var/run/netns/cni-ac9a0dd6-2bd0-755c-ff95-408aca9710d5" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.673 [INFO][4406] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" iface="eth0" netns="/var/run/netns/cni-ac9a0dd6-2bd0-755c-ff95-408aca9710d5" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.673 [INFO][4406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.673 [INFO][4406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.707 [INFO][4418] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.707 [INFO][4418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.708 [INFO][4418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.717 [WARNING][4418] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.717 [INFO][4418] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.720 [INFO][4418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:07.728269 containerd[1726]: 2026-01-28 01:23:07.725 [INFO][4406] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:07.729277 containerd[1726]: time="2026-01-28T01:23:07.729063221Z" level=info msg="TearDown network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\" successfully" Jan 28 01:23:07.729277 containerd[1726]: time="2026-01-28T01:23:07.729103301Z" level=info msg="StopPodSandbox for \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\" returns successfully" Jan 28 01:23:07.732488 systemd[1]: run-netns-cni\x2dac9a0dd6\x2d2bd0\x2d755c\x2dff95\x2d408aca9710d5.mount: Deactivated successfully. Jan 28 01:23:07.743174 kubelet[3179]: I0128 01:23:07.743100 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nwgmg" podStartSLOduration=1.812813928 podStartE2EDuration="19.743086732s" podCreationTimestamp="2026-01-28 01:22:48 +0000 UTC" firstStartedPulling="2026-01-28 01:22:49.007667181 +0000 UTC m=+30.602910289" lastFinishedPulling="2026-01-28 01:23:06.937939985 +0000 UTC m=+48.533183093" observedRunningTime="2026-01-28 01:23:07.741562533 +0000 UTC m=+49.336805641" watchObservedRunningTime="2026-01-28 01:23:07.743086732 +0000 UTC m=+49.338329840" Jan 28 01:23:08.039703 kubelet[3179]: I0128 01:23:08.039569 3179 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0d110ad0-2f02-402c-8f06-ffae6a1d70c4" (UID: "0d110ad0-2f02-402c-8f06-ffae6a1d70c4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:23:08.039824 kubelet[3179]: I0128 01:23:08.039804 3179 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-whisker-ca-bundle\") pod \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\" (UID: \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\") " Jan 28 01:23:08.039902 kubelet[3179]: I0128 01:23:08.039874 3179 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-whisker-backend-key-pair\") pod \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\" (UID: \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\") " Jan 28 01:23:08.039902 kubelet[3179]: I0128 01:23:08.039901 3179 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc5c8\" (UniqueName: \"kubernetes.io/projected/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-kube-api-access-bc5c8\") pod \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\" (UID: \"0d110ad0-2f02-402c-8f06-ffae6a1d70c4\") " Jan 28 01:23:08.040313 kubelet[3179]: I0128 01:23:08.039964 3179 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-whisker-ca-bundle\") on node \"ci-4081.3.6-n-6d8ceced70\" DevicePath \"\"" Jan 28 01:23:08.044078 kubelet[3179]: I0128 01:23:08.044045 3179 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0d110ad0-2f02-402c-8f06-ffae6a1d70c4" (UID: "0d110ad0-2f02-402c-8f06-ffae6a1d70c4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:23:08.044294 kubelet[3179]: I0128 01:23:08.044144 3179 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-kube-api-access-bc5c8" (OuterVolumeSpecName: "kube-api-access-bc5c8") pod "0d110ad0-2f02-402c-8f06-ffae6a1d70c4" (UID: "0d110ad0-2f02-402c-8f06-ffae6a1d70c4"). InnerVolumeSpecName "kube-api-access-bc5c8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:23:08.044779 systemd[1]: var-lib-kubelet-pods-0d110ad0\x2d2f02\x2d402c\x2d8f06\x2dffae6a1d70c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbc5c8.mount: Deactivated successfully. Jan 28 01:23:08.045082 systemd[1]: var-lib-kubelet-pods-0d110ad0\x2d2f02\x2d402c\x2d8f06\x2dffae6a1d70c4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 01:23:08.140609 kubelet[3179]: I0128 01:23:08.140572 3179 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-6d8ceced70\" DevicePath \"\"" Jan 28 01:23:08.140609 kubelet[3179]: I0128 01:23:08.140605 3179 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bc5c8\" (UniqueName: \"kubernetes.io/projected/0d110ad0-2f02-402c-8f06-ffae6a1d70c4-kube-api-access-bc5c8\") on node \"ci-4081.3.6-n-6d8ceced70\" DevicePath \"\"" Jan 28 01:23:08.528531 systemd[1]: Removed slice kubepods-besteffort-pod0d110ad0_2f02_402c_8f06_ffae6a1d70c4.slice - libcontainer container kubepods-besteffort-pod0d110ad0_2f02_402c_8f06_ffae6a1d70c4.slice. Jan 28 01:23:08.737610 systemd[1]: run-containerd-runc-k8s.io-1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e-runc.tqMARV.mount: Deactivated successfully. Jan 28 01:23:08.791690 systemd[1]: Created slice kubepods-besteffort-pod0439c29d_4b7c_4f38_8c80_be3fa0839945.slice - libcontainer container kubepods-besteffort-pod0439c29d_4b7c_4f38_8c80_be3fa0839945.slice. 
Jan 28 01:23:08.846187 kubelet[3179]: I0128 01:23:08.846045 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gfrg\" (UniqueName: \"kubernetes.io/projected/0439c29d-4b7c-4f38-8c80-be3fa0839945-kube-api-access-4gfrg\") pod \"whisker-5dd96f4d7f-sqvjh\" (UID: \"0439c29d-4b7c-4f38-8c80-be3fa0839945\") " pod="calico-system/whisker-5dd96f4d7f-sqvjh" Jan 28 01:23:08.846187 kubelet[3179]: I0128 01:23:08.846091 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0439c29d-4b7c-4f38-8c80-be3fa0839945-whisker-backend-key-pair\") pod \"whisker-5dd96f4d7f-sqvjh\" (UID: \"0439c29d-4b7c-4f38-8c80-be3fa0839945\") " pod="calico-system/whisker-5dd96f4d7f-sqvjh" Jan 28 01:23:08.846187 kubelet[3179]: I0128 01:23:08.846109 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0439c29d-4b7c-4f38-8c80-be3fa0839945-whisker-ca-bundle\") pod \"whisker-5dd96f4d7f-sqvjh\" (UID: \"0439c29d-4b7c-4f38-8c80-be3fa0839945\") " pod="calico-system/whisker-5dd96f4d7f-sqvjh" Jan 28 01:23:09.116162 containerd[1726]: time="2026-01-28T01:23:09.116061988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd96f4d7f-sqvjh,Uid:0439c29d-4b7c-4f38-8c80-be3fa0839945,Namespace:calico-system,Attempt:0,}" Jan 28 01:23:09.338871 kernel: bpftool[4608]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 01:23:09.636593 systemd-networkd[1339]: vxlan.calico: Link UP Jan 28 01:23:09.636599 systemd-networkd[1339]: vxlan.calico: Gained carrier Jan 28 01:23:09.791567 systemd-networkd[1339]: cali11f61a4a4c5: Link UP Jan 28 01:23:09.792573 systemd-networkd[1339]: cali11f61a4a4c5: Gained carrier Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.696 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0 whisker-5dd96f4d7f- calico-system 0439c29d-4b7c-4f38-8c80-be3fa0839945 954 0 2026-01-28 01:23:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dd96f4d7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-6d8ceced70 whisker-5dd96f4d7f-sqvjh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali11f61a4a4c5 [] [] }} ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Namespace="calico-system" Pod="whisker-5dd96f4d7f-sqvjh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.697 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Namespace="calico-system" Pod="whisker-5dd96f4d7f-sqvjh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.743 [INFO][4655] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" HandleID="k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" 
Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.743 [INFO][4655] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" HandleID="k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-6d8ceced70", "pod":"whisker-5dd96f4d7f-sqvjh", "timestamp":"2026-01-28 01:23:09.743495851 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d8ceced70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.743 [INFO][4655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.743 [INFO][4655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.743 [INFO][4655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d8ceced70' Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.752 [INFO][4655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.756 [INFO][4655] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.760 [INFO][4655] ipam/ipam.go 511: Trying affinity for 192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.762 [INFO][4655] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.764 [INFO][4655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.764 [INFO][4655] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.64/26 handle="k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.767 [INFO][4655] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.772 [INFO][4655] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.64/26 handle="k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.783 [INFO][4655] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.65/26] block=192.168.11.64/26 handle="k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.783 [INFO][4655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.65/26] 
handle="k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.783 [INFO][4655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:09.822957 containerd[1726]: 2026-01-28 01:23:09.783 [INFO][4655] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.65/26] IPv6=[] ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" HandleID="k8s-pod-network.0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" Jan 28 01:23:09.823552 containerd[1726]: 2026-01-28 01:23:09.786 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Namespace="calico-system" Pod="whisker-5dd96f4d7f-sqvjh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0", GenerateName:"whisker-5dd96f4d7f-", Namespace:"calico-system", SelfLink:"", UID:"0439c29d-4b7c-4f38-8c80-be3fa0839945", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dd96f4d7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"", Pod:"whisker-5dd96f4d7f-sqvjh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.11.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali11f61a4a4c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:09.823552 containerd[1726]: 2026-01-28 01:23:09.787 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.65/32] ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Namespace="calico-system" Pod="whisker-5dd96f4d7f-sqvjh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" Jan 28 01:23:09.823552 containerd[1726]: 2026-01-28 01:23:09.787 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11f61a4a4c5 ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Namespace="calico-system" Pod="whisker-5dd96f4d7f-sqvjh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" Jan 28 01:23:09.823552 containerd[1726]: 2026-01-28 01:23:09.792 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Namespace="calico-system" Pod="whisker-5dd96f4d7f-sqvjh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" Jan 28 01:23:09.823552 
containerd[1726]: 2026-01-28 01:23:09.793 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Namespace="calico-system" Pod="whisker-5dd96f4d7f-sqvjh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0", GenerateName:"whisker-5dd96f4d7f-", Namespace:"calico-system", SelfLink:"", UID:"0439c29d-4b7c-4f38-8c80-be3fa0839945", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dd96f4d7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe", Pod:"whisker-5dd96f4d7f-sqvjh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.11.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali11f61a4a4c5", MAC:"6a:10:ac:28:bf:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:09.823552 containerd[1726]: 2026-01-28 01:23:09.818 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe" Namespace="calico-system" Pod="whisker-5dd96f4d7f-sqvjh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--5dd96f4d7f--sqvjh-eth0" Jan 28 01:23:09.871714 containerd[1726]: time="2026-01-28T01:23:09.870416295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:23:09.871714 containerd[1726]: time="2026-01-28T01:23:09.870479295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:23:09.871714 containerd[1726]: time="2026-01-28T01:23:09.870495095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:09.871714 containerd[1726]: time="2026-01-28T01:23:09.870591214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:09.901150 systemd[1]: run-containerd-runc-k8s.io-0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe-runc.T4znKr.mount: Deactivated successfully. Jan 28 01:23:09.916033 systemd[1]: Started cri-containerd-0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe.scope - libcontainer container 0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe. 
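The IPAM conversation above follows a fixed shape: acquire the host-wide lock, confirm the block with affinity to this node (192.168.11.64/26), claim the first free address for a new handle, write the block back, release the lock. A toy Go sketch of that claim step; the types and names here are hypothetical (the real logic lives in calico/ipam), and the pre-seeded `.64` assumes the node's vxlan.calico tunnel address took the first slot earlier, which is consistent with the whisker pod receiving `.65`:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a toy stand-in for a Calico IPAM affinity block.
type block struct {
	cidr      netip.Prefix          // the node-affine block, 192.168.11.64/26
	allocated map[netip.Addr]string // address -> handle ID
}

// assign claims the first free address, mirroring ipam.go 1219 "Attempting to
// assign 1 addresses from block" followed by 1246 "Writing block in order to
// claim IPs" in the log above.
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.11.64/26"),
		// Assumption: .64 went to the node's vxlan.calico tunnel address.
		allocated: map[netip.Addr]string{
			netip.MustParseAddr("192.168.11.64"): "vxlan-tunnel-addr",
		},
	}
	ip, _ := b.assign("k8s-pod-network.0f231f596ac8749a...") // handle truncated
	fmt.Println(ip) // 192.168.11.65, matching the whisker pod's assignment
}
```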
Jan 28 01:23:09.971471 containerd[1726]: time="2026-01-28T01:23:09.971232434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd96f4d7f-sqvjh,Uid:0439c29d-4b7c-4f38-8c80-be3fa0839945,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f231f596ac8749ae70ca578cfaafd0763783ed20d39461a223b2e62fdb5b7fe\"" Jan 28 01:23:09.975185 containerd[1726]: time="2026-01-28T01:23:09.973734073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:23:10.364481 containerd[1726]: time="2026-01-28T01:23:10.364434518Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:10.367171 containerd[1726]: time="2026-01-28T01:23:10.367120356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:23:10.367254 containerd[1726]: time="2026-01-28T01:23:10.367228836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:23:10.367591 kubelet[3179]: E0128 01:23:10.367396 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:23:10.367591 kubelet[3179]: E0128 01:23:10.367447 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:23:10.369628 kubelet[3179]: E0128 01:23:10.369575 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dd96f4d7f-sqvjh_calico-system(0439c29d-4b7c-4f38-8c80-be3fa0839945): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:10.370616 containerd[1726]: time="2026-01-28T01:23:10.370591794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:23:10.520466 kubelet[3179]: I0128 01:23:10.520276 3179 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d110ad0-2f02-402c-8f06-ffae6a1d70c4" path="/var/lib/kubelet/pods/0d110ad0-2f02-402c-8f06-ffae6a1d70c4/volumes" Jan 28 01:23:10.649126 containerd[1726]: time="2026-01-28T01:23:10.649008067Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:10.651544 containerd[1726]: time="2026-01-28T01:23:10.651502306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:23:10.651640 
containerd[1726]: time="2026-01-28T01:23:10.651613665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:23:10.651792 kubelet[3179]: E0128 01:23:10.651761 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:23:10.651895 kubelet[3179]: E0128 01:23:10.651803 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:23:10.651895 kubelet[3179]: E0128 01:23:10.651887 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dd96f4d7f-sqvjh_calico-system(0439c29d-4b7c-4f38-8c80-be3fa0839945): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:10.651980 kubelet[3179]: E0128 01:23:10.651925 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:23:10.705316 kubelet[3179]: E0128 01:23:10.705065 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:23:11.028023 systemd-networkd[1339]: vxlan.calico: Gained 
IPv6LL Jan 28 01:23:11.092020 systemd-networkd[1339]: cali11f61a4a4c5: Gained IPv6LL Jan 28 01:23:11.705730 kubelet[3179]: E0128 01:23:11.705583 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:23:12.519509 containerd[1726]: time="2026-01-28T01:23:12.519367264Z" level=info msg="StopPodSandbox for \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\"" Jan 28 01:23:12.520826 containerd[1726]: time="2026-01-28T01:23:12.520734263Z" level=info msg="StopPodSandbox for \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\"" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.575 [INFO][4772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.575 [INFO][4772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" iface="eth0" netns="/var/run/netns/cni-27ae8b7a-b864-fa18-d787-5b2af0c07af0" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.575 [INFO][4772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" iface="eth0" netns="/var/run/netns/cni-27ae8b7a-b864-fa18-d787-5b2af0c07af0" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.576 [INFO][4772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" iface="eth0" netns="/var/run/netns/cni-27ae8b7a-b864-fa18-d787-5b2af0c07af0" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.576 [INFO][4772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.576 [INFO][4772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.604 [INFO][4789] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.604 [INFO][4789] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.604 [INFO][4789] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.612 [WARNING][4789] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.612 [INFO][4789] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.614 [INFO][4789] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:12.619225 containerd[1726]: 2026-01-28 01:23:12.615 [INFO][4772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:12.623039 containerd[1726]: time="2026-01-28T01:23:12.619775244Z" level=info msg="TearDown network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\" successfully" Jan 28 01:23:12.623039 containerd[1726]: time="2026-01-28T01:23:12.622905002Z" level=info msg="StopPodSandbox for \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\" returns successfully" Jan 28 01:23:12.623774 systemd[1]: run-netns-cni\x2d27ae8b7a\x2db864\x2dfa18\x2dd787\x2d5b2af0c07af0.mount: Deactivated successfully. Jan 28 01:23:12.629630 containerd[1726]: time="2026-01-28T01:23:12.629251958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4f6486c-pzztc,Uid:e15a9a69-173f-490e-af7a-8a44d37eda4d,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.581 [INFO][4780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.582 [INFO][4780] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" iface="eth0" netns="/var/run/netns/cni-da3a326c-fde8-dfd2-610c-4d29a35b4a70" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.582 [INFO][4780] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" iface="eth0" netns="/var/run/netns/cni-da3a326c-fde8-dfd2-610c-4d29a35b4a70" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.583 [INFO][4780] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" iface="eth0" netns="/var/run/netns/cni-da3a326c-fde8-dfd2-610c-4d29a35b4a70" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.583 [INFO][4780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.583 [INFO][4780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.608 [INFO][4794] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.608 [INFO][4794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.614 [INFO][4794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.628 [WARNING][4794] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.628 [INFO][4794] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.630 [INFO][4794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:12.634461 containerd[1726]: 2026-01-28 01:23:12.632 [INFO][4780] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:12.634965 containerd[1726]: time="2026-01-28T01:23:12.634933434Z" level=info msg="TearDown network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\" successfully" Jan 28 01:23:12.635006 containerd[1726]: time="2026-01-28T01:23:12.634964314Z" level=info msg="StopPodSandbox for \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\" returns successfully" Jan 28 01:23:12.637197 systemd[1]: run-netns-cni\x2dda3a326c\x2dfde8\x2ddfd2\x2d610c\x2d4d29a35b4a70.mount: Deactivated successfully. Jan 28 01:23:12.647745 containerd[1726]: time="2026-01-28T01:23:12.647710907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cc7cdcfb-z7vlh,Uid:7ad0c2f8-bb34-49c9-a1bb-d618f47675e5,Namespace:calico-system,Attempt:1,}" Jan 28 01:23:12.821360 systemd-networkd[1339]: calid58775f0319: Link UP Jan 28 01:23:12.823146 systemd-networkd[1339]: calid58775f0319: Gained carrier Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.727 [INFO][4803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0 calico-apiserver-69c4f6486c- calico-apiserver e15a9a69-173f-490e-af7a-8a44d37eda4d 988 0 2026-01-28 01:22:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69c4f6486c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-6d8ceced70 calico-apiserver-69c4f6486c-pzztc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid58775f0319 [] [] }} ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-pzztc" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.727 [INFO][4803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-pzztc" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.764 [INFO][4826] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" HandleID="k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.764 [INFO][4826] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" HandleID="k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-6d8ceced70", "pod":"calico-apiserver-69c4f6486c-pzztc", "timestamp":"2026-01-28 01:23:12.764154837 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d8ceced70", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.764 [INFO][4826] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.764 [INFO][4826] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.764 [INFO][4826] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d8ceced70' Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.779 [INFO][4826] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.790 [INFO][4826] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.795 [INFO][4826] ipam/ipam.go 511: Trying affinity for 192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.797 [INFO][4826] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.799 [INFO][4826] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.799 [INFO][4826] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.64/26 handle="k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.801 [INFO][4826] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2 Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.809 [INFO][4826] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.64/26 handle="k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.814 [INFO][4826] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.66/26] block=192.168.11.64/26 handle="k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.814 [INFO][4826] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.66/26] handle="k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.814 [INFO][4826] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
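Each teardown pass above ends with the same WARNING from ipam_plugin.go 453: "Asked to release address but it doesn't exist. Ignoring". Release is idempotent by design: a CNI DEL can be retried or arrive after the address was already freed, so a missing handle is logged and skipped rather than treated as an error. A minimal sketch of that contract, with hypothetical types rather than Calico's actual code:

```go
package main

import "log"

type allocator struct {
	byHandle map[string][]string // handle ID -> addresses it owns
}

// releaseByHandle frees whatever the handle still owns; releasing an unknown
// handle is a warning and a no-op, so repeated CNI DELs stay safe.
func (a *allocator) releaseByHandle(handle string) []string {
	addrs, ok := a.byHandle[handle]
	if !ok {
		log.Printf("WARNING: asked to release %q but it doesn't exist; ignoring", handle)
		return nil
	}
	delete(a.byHandle, handle)
	return addrs
}

func main() {
	a := &allocator{byHandle: map[string][]string{}}
	a.releaseByHandle("k8s-pod-network.24e3e54b6a7cfd9b...") // already gone: warns, returns nil
}
```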
Jan 28 01:23:12.845680 containerd[1726]: 2026-01-28 01:23:12.814 [INFO][4826] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.66/26] IPv6=[] ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" HandleID="k8s-pod-network.e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.846259 containerd[1726]: 2026-01-28 01:23:12.819 [INFO][4803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-pzztc" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0", GenerateName:"calico-apiserver-69c4f6486c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e15a9a69-173f-490e-af7a-8a44d37eda4d", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4f6486c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"", Pod:"calico-apiserver-69c4f6486c-pzztc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58775f0319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:12.846259 containerd[1726]: 2026-01-28 01:23:12.819 [INFO][4803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.66/32] ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-pzztc" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.846259 containerd[1726]: 2026-01-28 01:23:12.819 [INFO][4803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid58775f0319 ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-pzztc" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.846259 containerd[1726]: 2026-01-28 01:23:12.824 [INFO][4803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-pzztc" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.846259 containerd[1726]: 2026-01-28 01:23:12.825 
[INFO][4803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-pzztc" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0", GenerateName:"calico-apiserver-69c4f6486c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e15a9a69-173f-490e-af7a-8a44d37eda4d", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4f6486c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2", Pod:"calico-apiserver-69c4f6486c-pzztc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58775f0319", MAC:"16:83:39:7b:21:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:12.846259 containerd[1726]: 2026-01-28 01:23:12.843 [INFO][4803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-pzztc" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:12.863648 containerd[1726]: time="2026-01-28T01:23:12.863558817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:23:12.863881 containerd[1726]: time="2026-01-28T01:23:12.863756377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:23:12.863881 containerd[1726]: time="2026-01-28T01:23:12.863793937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:12.864051 containerd[1726]: time="2026-01-28T01:23:12.864016417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:12.881065 systemd[1]: Started cri-containerd-e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2.scope - libcontainer container e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2. 
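The two WorkloadEndpoint dumps above show the endpoint written in two phases: k8s.go 418 populates it with the assigned IP, profiles, and host-side interface name while `ContainerID` and `MAC` are still empty; k8s.go 446 fills those in once the veth pair exists, and k8s.go 532 persists the result. A simplified Go mirror of just the fields visible in the log (the real type is `v3.WorkloadEndpointSpec` from the projectcalico API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// workloadEndpoint keeps only the fields visible in the log above.
type workloadEndpoint struct {
	Node          string   `json:"node"`
	Pod           string   `json:"pod"`
	InterfaceName string   `json:"interfaceName"`
	IPNetworks    []string `json:"ipNetworks"`
	ContainerID   string   `json:"containerID,omitempty"`
	MAC           string   `json:"mac,omitempty"`
}

func main() {
	// Phase 1 (k8s.go 418 "Populated endpoint"): IP and interface known.
	ep := workloadEndpoint{
		Node:          "ci-4081.3.6-n-6d8ceced70",
		Pod:           "calico-apiserver-69c4f6486c-pzztc",
		InterfaceName: "calid58775f0319",
		IPNetworks:    []string{"192.168.11.66/32"},
	}
	// Phase 2 (k8s.go 446): MAC and container ID added after the veth is up,
	// then the endpoint is written to the datastore (k8s.go 532).
	ep.MAC = "16:83:39:7b:21:4a"
	ep.ContainerID = "e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2"
	out, _ := json.MarshalIndent(ep, "", "  ")
	fmt.Println(string(out))
}
```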
Jan 28 01:23:12.940909 systemd-networkd[1339]: caliec74292f2be: Link UP Jan 28 01:23:12.943278 systemd-networkd[1339]: caliec74292f2be: Gained carrier Jan 28 01:23:12.949712 containerd[1726]: time="2026-01-28T01:23:12.949663725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4f6486c-pzztc,Uid:e15a9a69-173f-490e-af7a-8a44d37eda4d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2\"" Jan 28 01:23:12.958252 containerd[1726]: time="2026-01-28T01:23:12.957963160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.746 [INFO][4814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0 calico-kube-controllers-56cc7cdcfb- calico-system 7ad0c2f8-bb34-49c9-a1bb-d618f47675e5 989 0 2026-01-28 01:22:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56cc7cdcfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-6d8ceced70 calico-kube-controllers-56cc7cdcfb-z7vlh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliec74292f2be [] [] }} ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Namespace="calico-system" Pod="calico-kube-controllers-56cc7cdcfb-z7vlh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.746 [INFO][4814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Namespace="calico-system" Pod="calico-kube-controllers-56cc7cdcfb-z7vlh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.789 [INFO][4831] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" HandleID="k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.790 [INFO][4831] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" HandleID="k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-6d8ceced70", "pod":"calico-kube-controllers-56cc7cdcfb-z7vlh", "timestamp":"2026-01-28 01:23:12.789761301 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d8ceced70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.790 [INFO][4831] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.814 [INFO][4831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.814 [INFO][4831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d8ceced70' Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.881 [INFO][4831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.893 [INFO][4831] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.898 [INFO][4831] ipam/ipam.go 511: Trying affinity for 192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.901 [INFO][4831] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.904 [INFO][4831] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.904 [INFO][4831] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.64/26 handle="k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.906 [INFO][4831] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102 Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.914 [INFO][4831] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.64/26 handle="k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.926 [INFO][4831] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.67/26] block=192.168.11.64/26 handle="k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.926 [INFO][4831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.67/26] handle="k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.926 [INFO][4831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
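Every image pull in this section dead-ends the same way: ghcr.io answers 404, containerd surfaces NotFound, kubelet records ErrImagePull for the attempt, and subsequent sync loops report ImagePullBackOff while the retry delay grows (the whisker pulls above already show the pattern; the apiserver and kube-controllers pulls below repeat it). A sketch of the doubling-with-cap delay shape kubelet uses between pull attempts; the 10s initial and 300s cap are assumed defaults, not values read from this node's configuration:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the retry delay up to a cap, the shape behind the
// ImagePullBackOff messages in the log. 10s/300s are assumed defaults.
func nextBackoff(cur time.Duration) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 300 * time.Second
	)
	if cur == 0 {
		return initial
	}
	if next := cur * 2; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 0; i < 7; i++ {
		d = nextBackoff(d)
		fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
	}
}
```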
Jan 28 01:23:12.982091 containerd[1726]: 2026-01-28 01:23:12.927 [INFO][4831] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.67/26] IPv6=[] ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" HandleID="k8s-pod-network.d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.982681 containerd[1726]: 2026-01-28 01:23:12.935 [INFO][4814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Namespace="calico-system" Pod="calico-kube-controllers-56cc7cdcfb-z7vlh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0", GenerateName:"calico-kube-controllers-56cc7cdcfb-", Namespace:"calico-system", SelfLink:"", UID:"7ad0c2f8-bb34-49c9-a1bb-d618f47675e5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cc7cdcfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"", Pod:"calico-kube-controllers-56cc7cdcfb-z7vlh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliec74292f2be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:12.982681 containerd[1726]: 2026-01-28 01:23:12.935 [INFO][4814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.67/32] ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Namespace="calico-system" Pod="calico-kube-controllers-56cc7cdcfb-z7vlh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.982681 containerd[1726]: 2026-01-28 01:23:12.935 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec74292f2be ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Namespace="calico-system" Pod="calico-kube-controllers-56cc7cdcfb-z7vlh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.982681 containerd[1726]: 2026-01-28 01:23:12.945 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Namespace="calico-system" Pod="calico-kube-controllers-56cc7cdcfb-z7vlh" 
WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:12.982681 containerd[1726]: 2026-01-28 01:23:12.949 [INFO][4814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Namespace="calico-system" Pod="calico-kube-controllers-56cc7cdcfb-z7vlh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0", GenerateName:"calico-kube-controllers-56cc7cdcfb-", Namespace:"calico-system", SelfLink:"", UID:"7ad0c2f8-bb34-49c9-a1bb-d618f47675e5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cc7cdcfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102", Pod:"calico-kube-controllers-56cc7cdcfb-z7vlh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliec74292f2be", MAC:"1a:1b:5c:3b:98:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:12.982681 containerd[1726]: 2026-01-28 01:23:12.977 [INFO][4814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102" Namespace="calico-system" Pod="calico-kube-controllers-56cc7cdcfb-z7vlh" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:13.014053 containerd[1726]: time="2026-01-28T01:23:13.013730847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:23:13.014053 containerd[1726]: time="2026-01-28T01:23:13.013786447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:23:13.014053 containerd[1726]: time="2026-01-28T01:23:13.013877327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:13.014649 containerd[1726]: time="2026-01-28T01:23:13.014457167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:13.036019 systemd[1]: Started cri-containerd-d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102.scope - libcontainer container d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102. Jan 28 01:23:13.069823 containerd[1726]: time="2026-01-28T01:23:13.069781733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56cc7cdcfb-z7vlh,Uid:7ad0c2f8-bb34-49c9-a1bb-d618f47675e5,Namespace:calico-system,Attempt:1,} returns sandbox id \"d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102\"" Jan 28 01:23:13.221882 containerd[1726]: time="2026-01-28T01:23:13.220930043Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:13.223990 containerd[1726]: time="2026-01-28T01:23:13.223892881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:23:13.224082 containerd[1726]: time="2026-01-28T01:23:13.223972961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:13.224170 kubelet[3179]: E0128 01:23:13.224131 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:13.224435 kubelet[3179]: E0128 01:23:13.224178 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:13.224435 kubelet[3179]: E0128 01:23:13.224348 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69c4f6486c-pzztc_calico-apiserver(e15a9a69-173f-490e-af7a-8a44d37eda4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:13.224435 kubelet[3179]: E0128 01:23:13.224423 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:23:13.224880 containerd[1726]: time="2026-01-28T01:23:13.224850760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:23:13.475922 containerd[1726]: time="2026-01-28T01:23:13.475783689Z" level=info msg="trying next host - response 
was http.StatusNotFound" host=ghcr.io Jan 28 01:23:13.478624 containerd[1726]: time="2026-01-28T01:23:13.478575208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:23:13.478693 containerd[1726]: time="2026-01-28T01:23:13.478678208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:23:13.479206 kubelet[3179]: E0128 01:23:13.478818 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:23:13.479206 kubelet[3179]: E0128 01:23:13.478879 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:23:13.479206 kubelet[3179]: E0128 01:23:13.478949 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-56cc7cdcfb-z7vlh_calico-system(7ad0c2f8-bb34-49c9-a1bb-d618f47675e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:13.479206 kubelet[3179]: E0128 01:23:13.478979 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:23:13.520014 containerd[1726]: time="2026-01-28T01:23:13.519412383Z" level=info msg="StopPodSandbox for \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\"" Jan 28 01:23:13.520014 containerd[1726]: time="2026-01-28T01:23:13.519817383Z" level=info msg="StopPodSandbox for \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\"" Jan 28 01:23:13.523771 containerd[1726]: time="2026-01-28T01:23:13.523558981Z" level=info msg="StopPodSandbox for \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\"" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.591 [INFO][4972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.592 [INFO][4972] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" iface="eth0" netns="/var/run/netns/cni-9a1e61de-9e43-caba-dd53-de4d83acfd5a" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.592 [INFO][4972] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" iface="eth0" netns="/var/run/netns/cni-9a1e61de-9e43-caba-dd53-de4d83acfd5a" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.593 [INFO][4972] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" iface="eth0" netns="/var/run/netns/cni-9a1e61de-9e43-caba-dd53-de4d83acfd5a" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.593 [INFO][4972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.593 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.629 [INFO][4989] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.629 [INFO][4989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.629 [INFO][4989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.649 [WARNING][4989] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.649 [INFO][4989] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.652 [INFO][4989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:13.658076 containerd[1726]: 2026-01-28 01:23:13.655 [INFO][4972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:13.660799 systemd[1]: run-netns-cni\x2d9a1e61de\x2d9e43\x2dcaba\x2ddd53\x2dde4d83acfd5a.mount: Deactivated successfully. 
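
The pull failures recorded above (ghcr.io answering http.StatusNotFound for the v3.30.4 Calico tags, which the kubelet then surfaces as ErrImagePull) can be reproduced against the node's own containerd rather than through the kubelet. A minimal Go sketch, assuming root on the node and the default containerd socket path; the image ref and the k8s.io namespace come from the log, the probe program itself is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The tag the kubelet kept failing on in the entries above.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		// containerd's resolver wraps missing-tag errors as NotFound,
		// matching the "rpc error: code = NotFound" seen in the log.
		if errdefs.IsNotFound(err) {
			fmt.Printf("%s does not exist upstream (tag or repo missing)\n", ref)
			return
		}
		log.Fatalf("pull failed for a different reason: %v", err)
	}
	fmt.Println("pull succeeded; the earlier failures were transient")
}

If errdefs.IsNotFound fires here as well, the tag is genuinely absent upstream, and no amount of kubelet back-off will recover it.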
Jan 28 01:23:13.662996 containerd[1726]: time="2026-01-28T01:23:13.662957657Z" level=info msg="TearDown network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\" successfully" Jan 28 01:23:13.663139 containerd[1726]: time="2026-01-28T01:23:13.663125257Z" level=info msg="StopPodSandbox for \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\" returns successfully" Jan 28 01:23:13.669646 containerd[1726]: time="2026-01-28T01:23:13.669610893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nm6,Uid:f68d28e5-4350-4cc7-aede-a307338915a7,Namespace:calico-system,Attempt:1,}" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.635 [INFO][4971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.635 [INFO][4971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" iface="eth0" netns="/var/run/netns/cni-28c86598-c252-4310-7fec-89bae0af18a2" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.635 [INFO][4971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" iface="eth0" netns="/var/run/netns/cni-28c86598-c252-4310-7fec-89bae0af18a2" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.635 [INFO][4971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" iface="eth0" netns="/var/run/netns/cni-28c86598-c252-4310-7fec-89bae0af18a2" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.635 [INFO][4971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.635 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.673 [INFO][5000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.673 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.673 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.685 [WARNING][5000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.685 [INFO][5000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.687 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:13.693958 containerd[1726]: 2026-01-28 01:23:13.690 [INFO][4971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:13.694357 containerd[1726]: time="2026-01-28T01:23:13.694078958Z" level=info msg="TearDown network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\" successfully" Jan 28 01:23:13.694357 containerd[1726]: time="2026-01-28T01:23:13.694114198Z" level=info msg="StopPodSandbox for \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\" returns successfully" Jan 28 01:23:13.698195 systemd[1]: run-netns-cni\x2d28c86598\x2dc252\x2d4310\x2d7fec\x2d89bae0af18a2.mount: Deactivated successfully. Jan 28 01:23:13.703053 containerd[1726]: time="2026-01-28T01:23:13.702124234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4f6486c-snwn4,Uid:e1527d25-60e3-4960-9f63-e5d366bf57e5,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.647 [INFO][4973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.648 [INFO][4973] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" iface="eth0" netns="/var/run/netns/cni-b1889908-0485-6d66-5ed0-27af15530dde" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.648 [INFO][4973] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" iface="eth0" netns="/var/run/netns/cni-b1889908-0485-6d66-5ed0-27af15530dde" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.648 [INFO][4973] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" iface="eth0" netns="/var/run/netns/cni-b1889908-0485-6d66-5ed0-27af15530dde" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.648 [INFO][4973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.648 [INFO][4973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.684 [INFO][5005] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.684 [INFO][5005] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.687 [INFO][5005] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.704 [WARNING][5005] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.704 [INFO][5005] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.707 [INFO][5005] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:13.715664 containerd[1726]: 2026-01-28 01:23:13.712 [INFO][4973] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:13.716465 containerd[1726]: time="2026-01-28T01:23:13.716312025Z" level=info msg="TearDown network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\" successfully" Jan 28 01:23:13.716465 containerd[1726]: time="2026-01-28T01:23:13.716341345Z" level=info msg="StopPodSandbox for \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\" returns successfully" Jan 28 01:23:13.728529 containerd[1726]: time="2026-01-28T01:23:13.728052378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lmhb6,Uid:1055b396-3282-41c6-8cd5-0cd8ecaec9e4,Namespace:calico-system,Attempt:1,}" Jan 28 01:23:13.729005 kubelet[3179]: E0128 01:23:13.728867 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:23:13.748606 kubelet[3179]: E0128 01:23:13.748373 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:23:13.903923 systemd-networkd[1339]: cali9dee17032a9: Link UP Jan 28 01:23:13.905803 systemd-networkd[1339]: cali9dee17032a9: Gained carrier Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.796 [INFO][5013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0 csi-node-driver- calico-system f68d28e5-4350-4cc7-aede-a307338915a7 1009 0 2026-01-28 01:22:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-6d8ceced70 csi-node-driver-s8nm6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9dee17032a9 [] [] }} ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Namespace="calico-system" Pod="csi-node-driver-s8nm6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.797 [INFO][5013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Namespace="calico-system" Pod="csi-node-driver-s8nm6" 
WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.836 [INFO][5047] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" HandleID="k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.838 [INFO][5047] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" HandleID="k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-6d8ceced70", "pod":"csi-node-driver-s8nm6", "timestamp":"2026-01-28 01:23:13.836720833 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d8ceced70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.838 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.838 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.839 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d8ceced70' Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.856 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.862 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.866 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.869 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.873 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.873 [INFO][5047] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.64/26 handle="k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.875 [INFO][5047] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321 Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.881 [INFO][5047] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.64/26 handle="k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 
01:23:13.890 [INFO][5047] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.68/26] block=192.168.11.64/26 handle="k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.890 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.68/26] handle="k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.890 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:13.930132 containerd[1726]: 2026-01-28 01:23:13.890 [INFO][5047] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.68/26] IPv6=[] ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" HandleID="k8s-pod-network.e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.931418 containerd[1726]: 2026-01-28 01:23:13.897 [INFO][5013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Namespace="calico-system" Pod="csi-node-driver-s8nm6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f68d28e5-4350-4cc7-aede-a307338915a7", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"", Pod:"csi-node-driver-s8nm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9dee17032a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:13.931418 containerd[1726]: 2026-01-28 01:23:13.897 [INFO][5013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.68/32] ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Namespace="calico-system" Pod="csi-node-driver-s8nm6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.931418 containerd[1726]: 2026-01-28 01:23:13.897 [INFO][5013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dee17032a9 ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Namespace="calico-system" Pod="csi-node-driver-s8nm6" 
WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.931418 containerd[1726]: 2026-01-28 01:23:13.904 [INFO][5013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Namespace="calico-system" Pod="csi-node-driver-s8nm6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.931418 containerd[1726]: 2026-01-28 01:23:13.908 [INFO][5013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Namespace="calico-system" Pod="csi-node-driver-s8nm6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f68d28e5-4350-4cc7-aede-a307338915a7", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321", Pod:"csi-node-driver-s8nm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9dee17032a9", MAC:"72:f6:e7:ae:fa:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:13.931418 containerd[1726]: 2026-01-28 01:23:13.927 [INFO][5013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321" Namespace="calico-system" Pod="csi-node-driver-s8nm6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:13.955958 containerd[1726]: time="2026-01-28T01:23:13.955722321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:23:13.955958 containerd[1726]: time="2026-01-28T01:23:13.955801401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:23:13.955958 containerd[1726]: time="2026-01-28T01:23:13.955816521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:13.956451 containerd[1726]: time="2026-01-28T01:23:13.956182361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:13.982016 systemd[1]: Started cri-containerd-e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321.scope - libcontainer container e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321. Jan 28 01:23:14.014243 systemd-networkd[1339]: calia7f3df16ad7: Link UP Jan 28 01:23:14.017622 systemd-networkd[1339]: calia7f3df16ad7: Gained carrier Jan 28 01:23:14.045513 containerd[1726]: time="2026-01-28T01:23:14.045477667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nm6,Uid:f68d28e5-4350-4cc7-aede-a307338915a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321\"" Jan 28 01:23:14.050368 containerd[1726]: time="2026-01-28T01:23:14.049359225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.837 [INFO][5041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0 goldmane-7c778bb748- calico-system 1055b396-3282-41c6-8cd5-0cd8ecaec9e4 1011 0 2026-01-28 01:22:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-6d8ceced70 goldmane-7c778bb748-lmhb6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia7f3df16ad7 [] [] }} ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Namespace="calico-system" Pod="goldmane-7c778bb748-lmhb6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.837 [INFO][5041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Namespace="calico-system" Pod="goldmane-7c778bb748-lmhb6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.873 [INFO][5058] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" HandleID="k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.874 [INFO][5058] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" HandleID="k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-6d8ceced70", "pod":"goldmane-7c778bb748-lmhb6", "timestamp":"2026-01-28 01:23:13.873402811 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d8ceced70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.874 [INFO][5058] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.891 [INFO][5058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.891 [INFO][5058] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d8ceced70' Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.961 [INFO][5058] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.967 [INFO][5058] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.973 [INFO][5058] ipam/ipam.go 511: Trying affinity for 192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.978 [INFO][5058] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.981 [INFO][5058] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.981 [INFO][5058] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.64/26 handle="k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.985 [INFO][5058] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:13.992 [INFO][5058] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.64/26 handle="k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:14.002 [INFO][5058] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.69/26] block=192.168.11.64/26 handle="k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:14.002 [INFO][5058] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.69/26] handle="k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:14.002 [INFO][5058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
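
The IPAM entries above show every pod on this node drawing from the same affine block, 192.168.11.64/26: .67, .68 and now .69 were claimed in sequence under the host-wide lock. The arithmetic behind the block is easy to verify with the standard library; a small sketch using net/netip, with the prefix and addresses copied from the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block affinity this node keeps reusing in the ipam.go lines.
	block := netip.MustParsePrefix("192.168.11.64/26")

	// A /26 spans 2^(32-26) = 64 addresses: .64 through .127.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))

	// The pod IPs handed out in this log all fall inside that block.
	for _, s := range []string{"192.168.11.67", "192.168.11.68", "192.168.11.69", "192.168.11.70"} {
		fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
	}
}

With 64 addresses per block and only a handful claimed, the "Writing block in order to claim IPs" step has plenty of headroom on this node.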
Jan 28 01:23:14.055729 containerd[1726]: 2026-01-28 01:23:14.002 [INFO][5058] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.69/26] IPv6=[] ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" HandleID="k8s-pod-network.1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:14.056458 containerd[1726]: 2026-01-28 01:23:14.006 [INFO][5041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Namespace="calico-system" Pod="goldmane-7c778bb748-lmhb6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1055b396-3282-41c6-8cd5-0cd8ecaec9e4", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"", Pod:"goldmane-7c778bb748-lmhb6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.11.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia7f3df16ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:14.056458 containerd[1726]: 2026-01-28 01:23:14.006 [INFO][5041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.69/32] ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Namespace="calico-system" Pod="goldmane-7c778bb748-lmhb6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:14.056458 containerd[1726]: 2026-01-28 01:23:14.007 [INFO][5041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7f3df16ad7 ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Namespace="calico-system" Pod="goldmane-7c778bb748-lmhb6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:14.056458 containerd[1726]: 2026-01-28 01:23:14.017 [INFO][5041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Namespace="calico-system" Pod="goldmane-7c778bb748-lmhb6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:14.056458 containerd[1726]: 2026-01-28 01:23:14.019 [INFO][5041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" 
Namespace="calico-system" Pod="goldmane-7c778bb748-lmhb6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1055b396-3282-41c6-8cd5-0cd8ecaec9e4", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c", Pod:"goldmane-7c778bb748-lmhb6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.11.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia7f3df16ad7", MAC:"ce:aa:47:3f:32:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:14.056458 containerd[1726]: 2026-01-28 01:23:14.052 [INFO][5041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c" Namespace="calico-system" Pod="goldmane-7c778bb748-lmhb6" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:14.091273 containerd[1726]: time="2026-01-28T01:23:14.089538561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:23:14.091273 containerd[1726]: time="2026-01-28T01:23:14.089905121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:23:14.091273 containerd[1726]: time="2026-01-28T01:23:14.089924201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:14.091273 containerd[1726]: time="2026-01-28T01:23:14.090020401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:14.110042 systemd[1]: Started cri-containerd-1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c.scope - libcontainer container 1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c. 
Jan 28 01:23:14.133083 systemd-networkd[1339]: cali92a2bbd3304: Link UP Jan 28 01:23:14.133268 systemd-networkd[1339]: cali92a2bbd3304: Gained carrier Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:13.843 [INFO][5028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0 calico-apiserver-69c4f6486c- calico-apiserver e1527d25-60e3-4960-9f63-e5d366bf57e5 1010 0 2026-01-28 01:22:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69c4f6486c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-6d8ceced70 calico-apiserver-69c4f6486c-snwn4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92a2bbd3304 [] [] }} ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-snwn4" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:13.843 [INFO][5028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-snwn4" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:13.887 [INFO][5060] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" HandleID="k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:13.887 [INFO][5060] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" HandleID="k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-6d8ceced70", "pod":"calico-apiserver-69c4f6486c-snwn4", "timestamp":"2026-01-28 01:23:13.887529242 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d8ceced70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:13.887 [INFO][5060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.003 [INFO][5060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
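
systemd-networkd's "Link UP" and "Gained carrier" messages for caliec74292f2be, cali9dee17032a9, calia7f3df16ad7 and now cali92a2bbd3304 refer to the host side of each pod's veth pair, named by the plugin in the "Setting the host side veth name" lines. Their state can be confirmed from the host with a netlink query; a sketch using the github.com/vishvananda/netlink package, with the interface names copied from the log and the rest illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side veth names from the "Link UP" / "Gained carrier" lines.
	for _, name := range []string{"caliec74292f2be", "cali9dee17032a9", "calia7f3df16ad7", "cali92a2bbd3304"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Printf("%s: %v", name, err)
			continue
		}
		attrs := link.Attrs()
		fmt.Printf("%s: index=%d mtu=%d state=%s\n", name, attrs.Index, attrs.MTU, attrs.OperState)
	}
}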
Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.003 [INFO][5060] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d8ceced70' Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.058 [INFO][5060] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.068 [INFO][5060] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.077 [INFO][5060] ipam/ipam.go 511: Trying affinity for 192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.080 [INFO][5060] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.088 [INFO][5060] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.089 [INFO][5060] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.64/26 handle="k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.092 [INFO][5060] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35 Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.105 [INFO][5060] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.64/26 handle="k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.116 [INFO][5060] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.70/26] block=192.168.11.64/26 handle="k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.116 [INFO][5060] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.70/26] handle="k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.116 [INFO][5060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
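
The teardown entries earlier in this section ("Deleting workload's device in netns", "Workload's veth was already gone. Nothing to do.") operate inside named network namespaces under /var/run/netns/cni-*, the same mounts systemd later reports as deactivated. While one of those namespaces still exists, its contents can be examined much the way the plugin does; a sketch with the containernetworking ns helper, assuming root on the node, with the netns path being one of the cni-* paths from the log:

package main

import (
	"fmt"
	"log"
	"net"

	"github.com/containernetworking/plugins/pkg/ns"
)

func main() {
	// A named CNI netns path of the form seen in the teardown lines above.
	nsPath := "/var/run/netns/cni-48d10954-66d1-fa5c-40d5-bd5ecdc7865a"

	netNS, err := ns.GetNS(nsPath)
	if err != nil {
		// Matches the "veth was already gone" case: the netns mount may
		// already have been torn down by systemd.
		log.Fatalf("netns gone or inaccessible: %v", err)
	}
	defer netNS.Close()

	// List what is still wired up inside the pod's network namespace.
	err = netNS.Do(func(_ ns.NetNS) error {
		ifaces, err := net.Interfaces()
		if err != nil {
			return err
		}
		for _, i := range ifaces {
			fmt.Printf("%s (mtu %d, flags %s)\n", i.Name, i.MTU, i.Flags)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}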
Jan 28 01:23:14.157625 containerd[1726]: 2026-01-28 01:23:14.116 [INFO][5060] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.70/26] IPv6=[] ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" HandleID="k8s-pod-network.f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:14.158749 containerd[1726]: 2026-01-28 01:23:14.129 [INFO][5028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-snwn4" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0", GenerateName:"calico-apiserver-69c4f6486c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1527d25-60e3-4960-9f63-e5d366bf57e5", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4f6486c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"", Pod:"calico-apiserver-69c4f6486c-snwn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92a2bbd3304", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:14.158749 containerd[1726]: 2026-01-28 01:23:14.130 [INFO][5028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.70/32] ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-snwn4" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:14.158749 containerd[1726]: 2026-01-28 01:23:14.130 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92a2bbd3304 ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-snwn4" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:14.158749 containerd[1726]: 2026-01-28 01:23:14.133 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-snwn4" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:14.158749 containerd[1726]: 2026-01-28 01:23:14.133 
[INFO][5028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-snwn4" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0", GenerateName:"calico-apiserver-69c4f6486c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1527d25-60e3-4960-9f63-e5d366bf57e5", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4f6486c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35", Pod:"calico-apiserver-69c4f6486c-snwn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92a2bbd3304", MAC:"0e:f4:3a:ed:a8:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:14.158749 containerd[1726]: 2026-01-28 01:23:14.154 [INFO][5028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35" Namespace="calico-apiserver" Pod="calico-apiserver-69c4f6486c-snwn4" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:14.164996 systemd-networkd[1339]: caliec74292f2be: Gained IPv6LL Jan 28 01:23:14.181781 containerd[1726]: time="2026-01-28T01:23:14.181559466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:23:14.181781 containerd[1726]: time="2026-01-28T01:23:14.181668666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:23:14.182273 containerd[1726]: time="2026-01-28T01:23:14.182106745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:14.182852 containerd[1726]: time="2026-01-28T01:23:14.182775545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:14.205915 containerd[1726]: time="2026-01-28T01:23:14.205880451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lmhb6,Uid:1055b396-3282-41c6-8cd5-0cd8ecaec9e4,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c\"" Jan 28 01:23:14.210022 systemd[1]: Started cri-containerd-f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35.scope - libcontainer container f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35. Jan 28 01:23:14.242870 containerd[1726]: time="2026-01-28T01:23:14.242682029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4f6486c-snwn4,Uid:e1527d25-60e3-4960-9f63-e5d366bf57e5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35\"" Jan 28 01:23:14.292005 systemd-networkd[1339]: calid58775f0319: Gained IPv6LL Jan 28 01:23:14.333345 containerd[1726]: time="2026-01-28T01:23:14.333060095Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:14.336374 containerd[1726]: time="2026-01-28T01:23:14.336267053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:23:14.336374 containerd[1726]: time="2026-01-28T01:23:14.336342013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:23:14.336542 kubelet[3179]: E0128 01:23:14.336500 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:23:14.337269 kubelet[3179]: E0128 01:23:14.336544 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:23:14.337304 containerd[1726]: time="2026-01-28T01:23:14.336813732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:23:14.338410 kubelet[3179]: E0128 01:23:14.337391 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:14.520371 containerd[1726]: time="2026-01-28T01:23:14.519223583Z" level=info msg="StopPodSandbox for \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\"" Jan 28 01:23:14.593544 containerd[1726]: time="2026-01-28T01:23:14.591254900Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:14.594193 containerd[1726]: 
time="2026-01-28T01:23:14.594149178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:23:14.594355 containerd[1726]: time="2026-01-28T01:23:14.594258378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:14.594436 kubelet[3179]: E0128 01:23:14.594398 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:23:14.594559 kubelet[3179]: E0128 01:23:14.594463 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:23:14.594920 kubelet[3179]: E0128 01:23:14.594642 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lmhb6_calico-system(1055b396-3282-41c6-8cd5-0cd8ecaec9e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:14.594971 kubelet[3179]: E0128 01:23:14.594933 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:23:14.595310 containerd[1726]: time="2026-01-28T01:23:14.595277617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.572 [INFO][5231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.572 [INFO][5231] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" iface="eth0" netns="/var/run/netns/cni-48d10954-66d1-fa5c-40d5-bd5ecdc7865a" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.573 [INFO][5231] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" iface="eth0" netns="/var/run/netns/cni-48d10954-66d1-fa5c-40d5-bd5ecdc7865a" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.573 [INFO][5231] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" iface="eth0" netns="/var/run/netns/cni-48d10954-66d1-fa5c-40d5-bd5ecdc7865a" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.573 [INFO][5231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.573 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.590 [INFO][5239] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.591 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.591 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.605 [WARNING][5239] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.605 [INFO][5239] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.608 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:14.612505 containerd[1726]: 2026-01-28 01:23:14.610 [INFO][5231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:14.612995 containerd[1726]: time="2026-01-28T01:23:14.612631447Z" level=info msg="TearDown network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\" successfully" Jan 28 01:23:14.612995 containerd[1726]: time="2026-01-28T01:23:14.612656127Z" level=info msg="StopPodSandbox for \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\" returns successfully" Jan 28 01:23:14.624036 containerd[1726]: time="2026-01-28T01:23:14.624001480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lbbmn,Uid:b93794f0-c760-43f8-9817-c3814f113c55,Namespace:kube-system,Attempt:1,}" Jan 28 01:23:14.629472 systemd[1]: run-netns-cni\x2db1889908\x2d0485\x2d6d66\x2d5ed0\x2d27af15530dde.mount: Deactivated successfully. Jan 28 01:23:14.629611 systemd[1]: run-netns-cni\x2d48d10954\x2d66d1\x2dfa5c\x2d40d5\x2dbd5ecdc7865a.mount: Deactivated successfully. 
Jan 28 01:23:14.749990 kubelet[3179]: E0128 01:23:14.749954 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:23:14.757620 systemd-networkd[1339]: cali8dddf1d8203: Link UP Jan 28 01:23:14.757826 systemd-networkd[1339]: cali8dddf1d8203: Gained carrier Jan 28 01:23:14.759929 kubelet[3179]: E0128 01:23:14.759895 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:23:14.762243 kubelet[3179]: E0128 01:23:14.762214 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.683 [INFO][5246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0 coredns-66bc5c9577- kube-system b93794f0-c760-43f8-9817-c3814f113c55 1037 0 2026-01-28 01:22:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-6d8ceced70 coredns-66bc5c9577-lbbmn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8dddf1d8203 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Namespace="kube-system" Pod="coredns-66bc5c9577-lbbmn" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.683 [INFO][5246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Namespace="kube-system" Pod="coredns-66bc5c9577-lbbmn" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.705 [INFO][5257] ipam/ipam_plugin.go 227: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" HandleID="k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.705 [INFO][5257] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" HandleID="k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b220), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-6d8ceced70", "pod":"coredns-66bc5c9577-lbbmn", "timestamp":"2026-01-28 01:23:14.705204951 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d8ceced70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.705 [INFO][5257] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.705 [INFO][5257] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.705 [INFO][5257] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d8ceced70' Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.714 [INFO][5257] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.719 [INFO][5257] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.723 [INFO][5257] ipam/ipam.go 511: Trying affinity for 192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.724 [INFO][5257] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.726 [INFO][5257] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.726 [INFO][5257] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.64/26 handle="k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.728 [INFO][5257] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.732 [INFO][5257] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.64/26 handle="k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.744 [INFO][5257] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.71/26] block=192.168.11.64/26 
handle="k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.744 [INFO][5257] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.71/26] handle="k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.744 [INFO][5257] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:14.786077 containerd[1726]: 2026-01-28 01:23:14.744 [INFO][5257] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.71/26] IPv6=[] ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" HandleID="k8s-pod-network.7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.786628 containerd[1726]: 2026-01-28 01:23:14.750 [INFO][5246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Namespace="kube-system" Pod="coredns-66bc5c9577-lbbmn" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b93794f0-c760-43f8-9817-c3814f113c55", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"", Pod:"coredns-66bc5c9577-lbbmn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8dddf1d8203", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:14.786628 containerd[1726]: 2026-01-28 01:23:14.752 [INFO][5246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.71/32] 
ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Namespace="kube-system" Pod="coredns-66bc5c9577-lbbmn" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.786628 containerd[1726]: 2026-01-28 01:23:14.752 [INFO][5246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8dddf1d8203 ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Namespace="kube-system" Pod="coredns-66bc5c9577-lbbmn" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.786628 containerd[1726]: 2026-01-28 01:23:14.756 [INFO][5246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Namespace="kube-system" Pod="coredns-66bc5c9577-lbbmn" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.786628 containerd[1726]: 2026-01-28 01:23:14.759 [INFO][5246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Namespace="kube-system" Pod="coredns-66bc5c9577-lbbmn" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b93794f0-c760-43f8-9817-c3814f113c55", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac", Pod:"coredns-66bc5c9577-lbbmn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8dddf1d8203", MAC:"5e:ed:32:80:94:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:14.786812 
containerd[1726]: 2026-01-28 01:23:14.780 [INFO][5246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac" Namespace="kube-system" Pod="coredns-66bc5c9577-lbbmn" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:14.830751 containerd[1726]: time="2026-01-28T01:23:14.830647836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:23:14.830751 containerd[1726]: time="2026-01-28T01:23:14.830715996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:23:14.830751 containerd[1726]: time="2026-01-28T01:23:14.830731076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:14.830968 containerd[1726]: time="2026-01-28T01:23:14.830813356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:14.853956 containerd[1726]: time="2026-01-28T01:23:14.852993262Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:14.858455 containerd[1726]: time="2026-01-28T01:23:14.858398099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:23:14.858736 containerd[1726]: time="2026-01-28T01:23:14.858708019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:14.860338 kubelet[3179]: E0128 01:23:14.859209 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:14.860338 kubelet[3179]: E0128 01:23:14.859255 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:14.860338 kubelet[3179]: E0128 01:23:14.859418 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69c4f6486c-snwn4_calico-apiserver(e1527d25-60e3-4960-9f63-e5d366bf57e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:14.860338 kubelet[3179]: E0128 01:23:14.859451 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:23:14.860553 containerd[1726]: time="2026-01-28T01:23:14.859625338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:23:14.863629 systemd[1]: Started cri-containerd-7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac.scope - libcontainer container 7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac. Jan 28 01:23:14.901901 containerd[1726]: time="2026-01-28T01:23:14.900960034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lbbmn,Uid:b93794f0-c760-43f8-9817-c3814f113c55,Namespace:kube-system,Attempt:1,} returns sandbox id \"7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac\"" Jan 28 01:23:14.912417 containerd[1726]: time="2026-01-28T01:23:14.912284547Z" level=info msg="CreateContainer within sandbox \"7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:23:14.947697 containerd[1726]: time="2026-01-28T01:23:14.947652326Z" level=info msg="CreateContainer within sandbox \"7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f25d1f1199ef7ba36750336911d524014f45ef8fc744c6023a937228d5a22b4f\"" Jan 28 01:23:14.949114 containerd[1726]: time="2026-01-28T01:23:14.948805765Z" level=info msg="StartContainer for \"f25d1f1199ef7ba36750336911d524014f45ef8fc744c6023a937228d5a22b4f\"" Jan 28 01:23:14.976104 systemd[1]: Started cri-containerd-f25d1f1199ef7ba36750336911d524014f45ef8fc744c6023a937228d5a22b4f.scope - libcontainer container f25d1f1199ef7ba36750336911d524014f45ef8fc744c6023a937228d5a22b4f. 
Jan 28 01:23:15.003474 containerd[1726]: time="2026-01-28T01:23:15.003427012Z" level=info msg="StartContainer for \"f25d1f1199ef7ba36750336911d524014f45ef8fc744c6023a937228d5a22b4f\" returns successfully" Jan 28 01:23:15.163788 containerd[1726]: time="2026-01-28T01:23:15.163658876Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:15.166363 containerd[1726]: time="2026-01-28T01:23:15.166313514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:23:15.166501 containerd[1726]: time="2026-01-28T01:23:15.166417994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:23:15.166603 kubelet[3179]: E0128 01:23:15.166561 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:23:15.166652 kubelet[3179]: E0128 01:23:15.166611 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:23:15.166705 kubelet[3179]: E0128 01:23:15.166684 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:15.166769 kubelet[3179]: E0128 01:23:15.166727 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:23:15.508065 systemd-networkd[1339]: cali9dee17032a9: Gained IPv6LL Jan 28 01:23:15.764368 kubelet[3179]: E0128 01:23:15.762786 3179 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:23:15.764368 kubelet[3179]: E0128 01:23:15.763907 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:23:15.764749 kubelet[3179]: E0128 01:23:15.764283 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:23:15.765044 systemd-networkd[1339]: calia7f3df16ad7: Gained IPv6LL Jan 28 01:23:15.818762 kubelet[3179]: I0128 01:23:15.818157 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lbbmn" podStartSLOduration=49.818139292 podStartE2EDuration="49.818139292s" podCreationTimestamp="2026-01-28 01:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:23:15.800657261 +0000 UTC m=+57.395900369" watchObservedRunningTime="2026-01-28 01:23:15.818139292 +0000 UTC m=+57.413382400" Jan 28 01:23:15.955997 systemd-networkd[1339]: cali92a2bbd3304: Gained IPv6LL Jan 28 01:23:16.468023 systemd-networkd[1339]: cali8dddf1d8203: Gained IPv6LL Jan 28 01:23:16.518714 containerd[1726]: time="2026-01-28T01:23:16.518415399Z" level=info msg="StopPodSandbox for \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\"" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.566 [INFO][5370] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.567 [INFO][5370] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" iface="eth0" netns="/var/run/netns/cni-12788964-8d2b-5050-4a68-d7ea7905acfb" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.567 [INFO][5370] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" iface="eth0" netns="/var/run/netns/cni-12788964-8d2b-5050-4a68-d7ea7905acfb" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.567 [INFO][5370] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" iface="eth0" netns="/var/run/netns/cni-12788964-8d2b-5050-4a68-d7ea7905acfb" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.567 [INFO][5370] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.567 [INFO][5370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.585 [INFO][5378] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.585 [INFO][5378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.585 [INFO][5378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.593 [WARNING][5378] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.593 [INFO][5378] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.595 [INFO][5378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:16.600873 containerd[1726]: 2026-01-28 01:23:16.597 [INFO][5370] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:16.602104 systemd[1]: run-netns-cni\x2d12788964\x2d8d2b\x2d5050\x2d4a68\x2dd7ea7905acfb.mount: Deactivated successfully. 
Jan 28 01:23:16.602625 containerd[1726]: time="2026-01-28T01:23:16.600810115Z" level=info msg="TearDown network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\" successfully" Jan 28 01:23:16.602625 containerd[1726]: time="2026-01-28T01:23:16.602193475Z" level=info msg="StopPodSandbox for \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\" returns successfully" Jan 28 01:23:16.607816 containerd[1726]: time="2026-01-28T01:23:16.607782192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qhjz2,Uid:fc15c614-7b8d-4699-bd70-980cb39baa43,Namespace:kube-system,Attempt:1,}" Jan 28 01:23:16.741592 systemd-networkd[1339]: cali1a95f72d977: Link UP Jan 28 01:23:16.742483 systemd-networkd[1339]: cali1a95f72d977: Gained carrier Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.673 [INFO][5385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0 coredns-66bc5c9577- kube-system fc15c614-7b8d-4699-bd70-980cb39baa43 1081 0 2026-01-28 01:22:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-6d8ceced70 coredns-66bc5c9577-qhjz2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1a95f72d977 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Namespace="kube-system" Pod="coredns-66bc5c9577-qhjz2" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.673 [INFO][5385] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Namespace="kube-system" Pod="coredns-66bc5c9577-qhjz2" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.695 [INFO][5396] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" HandleID="k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.695 [INFO][5396] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" HandleID="k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-6d8ceced70", "pod":"coredns-66bc5c9577-qhjz2", "timestamp":"2026-01-28 01:23:16.695526425 +0000 UTC"}, Hostname:"ci-4081.3.6-n-6d8ceced70", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.695 [INFO][5396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.695 [INFO][5396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.695 [INFO][5396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-6d8ceced70' Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.704 [INFO][5396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.708 [INFO][5396] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.713 [INFO][5396] ipam/ipam.go 511: Trying affinity for 192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.715 [INFO][5396] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.717 [INFO][5396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.64/26 host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.717 [INFO][5396] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.64/26 handle="k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.718 [INFO][5396] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2 Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.726 [INFO][5396] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.64/26 handle="k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.734 [INFO][5396] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.72/26] block=192.168.11.64/26 handle="k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.734 [INFO][5396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.72/26] handle="k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" host="ci-4081.3.6-n-6d8ceced70" Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.734 [INFO][5396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
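The assignment trace above shows Calico's block-affinity strategy: look up the host's existing affinities, confirm the affinity for 192.168.11.64/26, load that block, and hand out the next free address — which is why this node's pods receive 192.168.11.70, .71, and .72 in sequence. A minimal Go sketch of next-free-address allocation from an affine /26 block; the free-list representation is a toy assumption, not Calico's datastore encoding:

package main

import (
	"fmt"
	"net/netip"
)

// block models one affine IPAM block (192.168.11.64/26 holds 64 addresses).
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]bool
}

// assign claims the next unallocated address, mirroring "Attempting to
// assign 1 addresses from block block=192.168.11.64/26".
func (b *block) assign() (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; the caller would claim a new block
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.11.64/26"),
		used: map[netip.Addr]bool{},
	}
	// Pretend .64-.69 were claimed earlier; the log then hands out .70, .71, .72.
	end := netip.MustParseAddr("192.168.11.70")
	for a := b.cidr.Addr(); a.Compare(end) < 0; a = a.Next() {
		b.used[a] = true
	}
	for i := 0; i < 3; i++ {
		addr, _ := b.assign()
		fmt.Println("assigned", addr) // 192.168.11.70, then .71, then .72
	}
}
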
Jan 28 01:23:16.760844 containerd[1726]: 2026-01-28 01:23:16.735 [INFO][5396] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.72/26] IPv6=[] ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" HandleID="k8s-pod-network.cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.762625 containerd[1726]: 2026-01-28 01:23:16.737 [INFO][5385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Namespace="kube-system" Pod="coredns-66bc5c9577-qhjz2" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fc15c614-7b8d-4699-bd70-980cb39baa43", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"", Pod:"coredns-66bc5c9577-qhjz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a95f72d977", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:16.762625 containerd[1726]: 2026-01-28 01:23:16.737 [INFO][5385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.72/32] ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Namespace="kube-system" Pod="coredns-66bc5c9577-qhjz2" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.762625 containerd[1726]: 2026-01-28 01:23:16.737 [INFO][5385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a95f72d977 ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Namespace="kube-system" Pod="coredns-66bc5c9577-qhjz2" 
WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.762625 containerd[1726]: 2026-01-28 01:23:16.743 [INFO][5385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Namespace="kube-system" Pod="coredns-66bc5c9577-qhjz2" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.762625 containerd[1726]: 2026-01-28 01:23:16.743 [INFO][5385] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Namespace="kube-system" Pod="coredns-66bc5c9577-qhjz2" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fc15c614-7b8d-4699-bd70-980cb39baa43", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2", Pod:"coredns-66bc5c9577-qhjz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a95f72d977", MAC:"ca:cd:3b:a6:5d:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:16.762824 containerd[1726]: 2026-01-28 01:23:16.758 [INFO][5385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2" Namespace="kube-system" Pod="coredns-66bc5c9577-qhjz2" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:16.786132 containerd[1726]: time="2026-01-28T01:23:16.785670977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:23:16.786132 containerd[1726]: time="2026-01-28T01:23:16.785728497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:23:16.786132 containerd[1726]: time="2026-01-28T01:23:16.785744177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:16.787180 containerd[1726]: time="2026-01-28T01:23:16.786743496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:23:16.816117 systemd[1]: Started cri-containerd-cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2.scope - libcontainer container cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2. Jan 28 01:23:16.855479 containerd[1726]: time="2026-01-28T01:23:16.855372460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qhjz2,Uid:fc15c614-7b8d-4699-bd70-980cb39baa43,Namespace:kube-system,Attempt:1,} returns sandbox id \"cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2\"" Jan 28 01:23:16.864304 containerd[1726]: time="2026-01-28T01:23:16.864265335Z" level=info msg="CreateContainer within sandbox \"cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:23:16.897036 containerd[1726]: time="2026-01-28T01:23:16.896996398Z" level=info msg="CreateContainer within sandbox \"cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14569cf5d4bc516663a257a38b7d936afba3fcf853e79d05c410e0e0c36e133a\"" Jan 28 01:23:16.898479 containerd[1726]: time="2026-01-28T01:23:16.897621917Z" level=info msg="StartContainer for \"14569cf5d4bc516663a257a38b7d936afba3fcf853e79d05c410e0e0c36e133a\"" Jan 28 01:23:16.922009 systemd[1]: Started cri-containerd-14569cf5d4bc516663a257a38b7d936afba3fcf853e79d05c410e0e0c36e133a.scope - libcontainer container 14569cf5d4bc516663a257a38b7d936afba3fcf853e79d05c410e0e0c36e133a. Jan 28 01:23:16.948910 containerd[1726]: time="2026-01-28T01:23:16.948865970Z" level=info msg="StartContainer for \"14569cf5d4bc516663a257a38b7d936afba3fcf853e79d05c410e0e0c36e133a\" returns successfully" Jan 28 01:23:17.782068 kubelet[3179]: I0128 01:23:17.781994 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qhjz2" podStartSLOduration=51.781977927 podStartE2EDuration="51.781977927s" podCreationTimestamp="2026-01-28 01:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:23:17.781412607 +0000 UTC m=+59.376655715" watchObservedRunningTime="2026-01-28 01:23:17.781977927 +0000 UTC m=+59.377220995" Jan 28 01:23:18.451939 systemd-networkd[1339]: cali1a95f72d977: Gained IPv6LL Jan 28 01:23:18.507603 containerd[1726]: time="2026-01-28T01:23:18.507252821Z" level=info msg="StopPodSandbox for \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\"" Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.553 [WARNING][5501] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0", GenerateName:"calico-apiserver-69c4f6486c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e15a9a69-173f-490e-af7a-8a44d37eda4d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4f6486c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2", Pod:"calico-apiserver-69c4f6486c-pzztc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58775f0319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.554 [INFO][5501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.554 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" iface="eth0" netns="" Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.554 [INFO][5501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.554 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.587 [INFO][5510] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.588 [INFO][5510] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.588 [INFO][5510] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.597 [WARNING][5510] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.597 [INFO][5510] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.600 [INFO][5510] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:18.604306 containerd[1726]: 2026-01-28 01:23:18.602 [INFO][5501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:18.604306 containerd[1726]: time="2026-01-28T01:23:18.604189610Z" level=info msg="TearDown network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\" successfully" Jan 28 01:23:18.604306 containerd[1726]: time="2026-01-28T01:23:18.604215530Z" level=info msg="StopPodSandbox for \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\" returns successfully" Jan 28 01:23:18.605697 containerd[1726]: time="2026-01-28T01:23:18.605243289Z" level=info msg="RemovePodSandbox for \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\"" Jan 28 01:23:18.610370 containerd[1726]: time="2026-01-28T01:23:18.610257607Z" level=info msg="Forcibly stopping sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\"" Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.647 [WARNING][5525] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0", GenerateName:"calico-apiserver-69c4f6486c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e15a9a69-173f-490e-af7a-8a44d37eda4d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4f6486c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"e29f5a209b8788917cbd994dbacae65d47838df6d4830766354574c3da1900b2", Pod:"calico-apiserver-69c4f6486c-pzztc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid58775f0319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.648 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.648 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" iface="eth0" netns="" Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.648 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.648 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.665 [INFO][5532] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.665 [INFO][5532] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.666 [INFO][5532] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.674 [WARNING][5532] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.674 [INFO][5532] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" HandleID="k8s-pod-network.6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--pzztc-eth0" Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.675 [INFO][5532] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:18.678642 containerd[1726]: 2026-01-28 01:23:18.677 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837" Jan 28 01:23:18.679181 containerd[1726]: time="2026-01-28T01:23:18.678686970Z" level=info msg="TearDown network for sandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\" successfully" Jan 28 01:23:18.689414 containerd[1726]: time="2026-01-28T01:23:18.689353245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:23:18.689544 containerd[1726]: time="2026-01-28T01:23:18.689431485Z" level=info msg="RemovePodSandbox \"6b60213b4c21f247aa9b9aebb15f384c79d6e44b3586ad260dc4ef0d0c75b837\" returns successfully" Jan 28 01:23:18.690917 containerd[1726]: time="2026-01-28T01:23:18.690624524Z" level=info msg="StopPodSandbox for \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\"" Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.732 [WARNING][5547] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0", GenerateName:"calico-kube-controllers-56cc7cdcfb-", Namespace:"calico-system", SelfLink:"", UID:"7ad0c2f8-bb34-49c9-a1bb-d618f47675e5", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cc7cdcfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102", Pod:"calico-kube-controllers-56cc7cdcfb-z7vlh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliec74292f2be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.732 [INFO][5547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.732 [INFO][5547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" iface="eth0" netns="" Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.732 [INFO][5547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.732 [INFO][5547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.753 [INFO][5555] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.753 [INFO][5555] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.753 [INFO][5555] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.762 [WARNING][5555] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.762 [INFO][5555] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.763 [INFO][5555] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:18.767602 containerd[1726]: 2026-01-28 01:23:18.765 [INFO][5547] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:18.768191 containerd[1726]: time="2026-01-28T01:23:18.767649963Z" level=info msg="TearDown network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\" successfully" Jan 28 01:23:18.768191 containerd[1726]: time="2026-01-28T01:23:18.767677003Z" level=info msg="StopPodSandbox for \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\" returns successfully" Jan 28 01:23:18.769218 containerd[1726]: time="2026-01-28T01:23:18.768920242Z" level=info msg="RemovePodSandbox for \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\"" Jan 28 01:23:18.769218 containerd[1726]: time="2026-01-28T01:23:18.768953362Z" level=info msg="Forcibly stopping sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\"" Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.819 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0", GenerateName:"calico-kube-controllers-56cc7cdcfb-", Namespace:"calico-system", SelfLink:"", UID:"7ad0c2f8-bb34-49c9-a1bb-d618f47675e5", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56cc7cdcfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"d910359a7f3f64d88d344f646c9092e49572b0b229c449b38f0fa7cf6b4d3102", Pod:"calico-kube-controllers-56cc7cdcfb-z7vlh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliec74292f2be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.819 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.819 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" iface="eth0" netns="" Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.819 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.819 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.841 [INFO][5576] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.841 [INFO][5576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.841 [INFO][5576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.850 [WARNING][5576] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.851 [INFO][5576] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" HandleID="k8s-pod-network.d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--kube--controllers--56cc7cdcfb--z7vlh-eth0" Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.852 [INFO][5576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:18.855859 containerd[1726]: 2026-01-28 01:23:18.854 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc" Jan 28 01:23:18.856657 containerd[1726]: time="2026-01-28T01:23:18.856335796Z" level=info msg="TearDown network for sandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\" successfully" Jan 28 01:23:18.863312 containerd[1726]: time="2026-01-28T01:23:18.863151912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:23:18.863312 containerd[1726]: time="2026-01-28T01:23:18.863215592Z" level=info msg="RemovePodSandbox \"d63a466f16f3265af1d13d7f5c66af9ca01345f99e52840562449da181e875dc\" returns successfully" Jan 28 01:23:18.863660 containerd[1726]: time="2026-01-28T01:23:18.863636072Z" level=info msg="StopPodSandbox for \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\"" Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.898 [WARNING][5590] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f68d28e5-4350-4cc7-aede-a307338915a7", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321", Pod:"csi-node-driver-s8nm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9dee17032a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.898 [INFO][5590] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.898 [INFO][5590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" iface="eth0" netns="" Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.898 [INFO][5590] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.898 [INFO][5590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.917 [INFO][5597] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.917 [INFO][5597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.917 [INFO][5597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.926 [WARNING][5597] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.926 [INFO][5597] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.927 [INFO][5597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:18.931006 containerd[1726]: 2026-01-28 01:23:18.929 [INFO][5590] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:18.932034 containerd[1726]: time="2026-01-28T01:23:18.931043196Z" level=info msg="TearDown network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\" successfully" Jan 28 01:23:18.932034 containerd[1726]: time="2026-01-28T01:23:18.931087596Z" level=info msg="StopPodSandbox for \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\" returns successfully" Jan 28 01:23:18.932512 containerd[1726]: time="2026-01-28T01:23:18.932152155Z" level=info msg="RemovePodSandbox for \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\"" Jan 28 01:23:18.932512 containerd[1726]: time="2026-01-28T01:23:18.932194675Z" level=info msg="Forcibly stopping sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\"" Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:18.975 [WARNING][5611] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f68d28e5-4350-4cc7-aede-a307338915a7", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"e8314c04311bdc3f49ec5cd57645d6efa0ce4b070959cae213ab6845cabcb321", Pod:"csi-node-driver-s8nm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9dee17032a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:18.975 [INFO][5611] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:18.975 [INFO][5611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" iface="eth0" netns="" Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:18.975 [INFO][5611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:18.975 [INFO][5611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:19.003 [INFO][5622] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:19.003 [INFO][5622] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:19.003 [INFO][5622] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:19.017 [WARNING][5622] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:19.018 [INFO][5622] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" HandleID="k8s-pod-network.6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-csi--node--driver--s8nm6-eth0" Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:19.020 [INFO][5622] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.025959 containerd[1726]: 2026-01-28 01:23:19.022 [INFO][5611] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f" Jan 28 01:23:19.025959 containerd[1726]: time="2026-01-28T01:23:19.025254346Z" level=info msg="TearDown network for sandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\" successfully" Jan 28 01:23:19.032971 containerd[1726]: time="2026-01-28T01:23:19.032529022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:23:19.033230 containerd[1726]: time="2026-01-28T01:23:19.033208062Z" level=info msg="RemovePodSandbox \"6d0b84efca9357cb81919957cf280f3b4598cbc8886670d11d7d7c4ccbd7df7f\" returns successfully" Jan 28 01:23:19.034118 containerd[1726]: time="2026-01-28T01:23:19.034088381Z" level=info msg="StopPodSandbox for \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\"" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.071 [WARNING][5636] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.071 [INFO][5636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.071 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" iface="eth0" netns="" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.071 [INFO][5636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.071 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.089 [INFO][5643] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.089 [INFO][5643] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.089 [INFO][5643] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.099 [WARNING][5643] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.099 [INFO][5643] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.101 [INFO][5643] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.107498 containerd[1726]: 2026-01-28 01:23:19.105 [INFO][5636] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:19.109006 containerd[1726]: time="2026-01-28T01:23:19.107673742Z" level=info msg="TearDown network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\" successfully" Jan 28 01:23:19.109006 containerd[1726]: time="2026-01-28T01:23:19.108060382Z" level=info msg="StopPodSandbox for \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\" returns successfully" Jan 28 01:23:19.110044 containerd[1726]: time="2026-01-28T01:23:19.110003781Z" level=info msg="RemovePodSandbox for \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\"" Jan 28 01:23:19.110516 containerd[1726]: time="2026-01-28T01:23:19.110217541Z" level=info msg="Forcibly stopping sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\"" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.143 [WARNING][5657] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" WorkloadEndpoint="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.144 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.144 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" iface="eth0" netns="" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.144 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.144 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.163 [INFO][5664] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.163 [INFO][5664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.163 [INFO][5664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.172 [WARNING][5664] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.172 [INFO][5664] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" HandleID="k8s-pod-network.24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Workload="ci--4081.3.6--n--6d8ceced70-k8s-whisker--64b8f9cd5f--h7lns-eth0" Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.174 [INFO][5664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.177872 containerd[1726]: 2026-01-28 01:23:19.176 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0" Jan 28 01:23:19.178365 containerd[1726]: time="2026-01-28T01:23:19.177932505Z" level=info msg="TearDown network for sandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\" successfully" Jan 28 01:23:19.188711 containerd[1726]: time="2026-01-28T01:23:19.188647819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:23:19.188711 containerd[1726]: time="2026-01-28T01:23:19.188716699Z" level=info msg="RemovePodSandbox \"24e3e54b6a7cfd9bff85ce60221e7a638fd92436578250242fa15a6166b537b0\" returns successfully" Jan 28 01:23:19.189301 containerd[1726]: time="2026-01-28T01:23:19.189240459Z" level=info msg="StopPodSandbox for \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\"" Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.222 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fc15c614-7b8d-4699-bd70-980cb39baa43", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2", Pod:"coredns-66bc5c9577-qhjz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a95f72d977", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.222 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.222 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" iface="eth0" netns="" Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.222 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.222 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.243 [INFO][5685] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.243 [INFO][5685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.243 [INFO][5685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.253 [WARNING][5685] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.253 [INFO][5685] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.254 [INFO][5685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.260050 containerd[1726]: 2026-01-28 01:23:19.256 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:19.260966 containerd[1726]: time="2026-01-28T01:23:19.260516861Z" level=info msg="TearDown network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\" successfully" Jan 28 01:23:19.260966 containerd[1726]: time="2026-01-28T01:23:19.260548261Z" level=info msg="StopPodSandbox for \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\" returns successfully" Jan 28 01:23:19.261365 containerd[1726]: time="2026-01-28T01:23:19.261337460Z" level=info msg="RemovePodSandbox for \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\"" Jan 28 01:23:19.261411 containerd[1726]: time="2026-01-28T01:23:19.261370620Z" level=info msg="Forcibly stopping sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\"" Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.297 [WARNING][5699] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fc15c614-7b8d-4699-bd70-980cb39baa43", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"cf07651e5c83478b069bf902f98f203f20a6418ae014be169e10b17dd98afcc2", Pod:"coredns-66bc5c9577-qhjz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a95f72d977", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.298 [INFO][5699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.298 [INFO][5699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" iface="eth0" netns="" Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.298 [INFO][5699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.298 [INFO][5699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.320 [INFO][5706] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.321 [INFO][5706] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.321 [INFO][5706] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.330 [WARNING][5706] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.330 [INFO][5706] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" HandleID="k8s-pod-network.560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--qhjz2-eth0" Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.332 [INFO][5706] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.337084 containerd[1726]: 2026-01-28 01:23:19.333 [INFO][5699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25" Jan 28 01:23:19.337084 containerd[1726]: time="2026-01-28T01:23:19.335801661Z" level=info msg="TearDown network for sandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\" successfully" Jan 28 01:23:19.342621 containerd[1726]: time="2026-01-28T01:23:19.342467137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:23:19.342621 containerd[1726]: time="2026-01-28T01:23:19.342535017Z" level=info msg="RemovePodSandbox \"560a927f883c6904b72a801e7bc3cb7cf12f118ff56d2c1c41b4e050f8049d25\" returns successfully" Jan 28 01:23:19.343075 containerd[1726]: time="2026-01-28T01:23:19.343027617Z" level=info msg="StopPodSandbox for \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\"" Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.381 [WARNING][5720] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1055b396-3282-41c6-8cd5-0cd8ecaec9e4", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c", Pod:"goldmane-7c778bb748-lmhb6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.11.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia7f3df16ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.381 [INFO][5720] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.381 [INFO][5720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" iface="eth0" netns="" Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.381 [INFO][5720] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.381 [INFO][5720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.402 [INFO][5727] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.403 [INFO][5727] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.403 [INFO][5727] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.416 [WARNING][5727] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.416 [INFO][5727] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.417 [INFO][5727] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.422128 containerd[1726]: 2026-01-28 01:23:19.419 [INFO][5720] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:19.422567 containerd[1726]: time="2026-01-28T01:23:19.422211335Z" level=info msg="TearDown network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\" successfully" Jan 28 01:23:19.422567 containerd[1726]: time="2026-01-28T01:23:19.422260055Z" level=info msg="StopPodSandbox for \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\" returns successfully" Jan 28 01:23:19.423123 containerd[1726]: time="2026-01-28T01:23:19.423072894Z" level=info msg="RemovePodSandbox for \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\"" Jan 28 01:23:19.423123 containerd[1726]: time="2026-01-28T01:23:19.423124454Z" level=info msg="Forcibly stopping sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\"" Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.456 [WARNING][5741] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1055b396-3282-41c6-8cd5-0cd8ecaec9e4", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"1b7dd2c73c1948e5dfc0809d62df620c21e5146b5609bdc85f260e0730d5e75c", Pod:"goldmane-7c778bb748-lmhb6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.11.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia7f3df16ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.456 [INFO][5741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.456 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" iface="eth0" netns="" Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.456 [INFO][5741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.456 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.473 [INFO][5748] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.473 [INFO][5748] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.473 [INFO][5748] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.481 [WARNING][5748] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.481 [INFO][5748] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" HandleID="k8s-pod-network.7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Workload="ci--4081.3.6--n--6d8ceced70-k8s-goldmane--7c778bb748--lmhb6-eth0" Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.482 [INFO][5748] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.486229 containerd[1726]: 2026-01-28 01:23:19.484 [INFO][5741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae" Jan 28 01:23:19.486633 containerd[1726]: time="2026-01-28T01:23:19.486273021Z" level=info msg="TearDown network for sandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\" successfully" Jan 28 01:23:19.514417 containerd[1726]: time="2026-01-28T01:23:19.514372046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:23:19.514760 containerd[1726]: time="2026-01-28T01:23:19.514442846Z" level=info msg="RemovePodSandbox \"7276cd27ff29ba0b077f74be3db0e29037ef4b24216db49dc06ba061d1e6c5ae\" returns successfully" Jan 28 01:23:19.514966 containerd[1726]: time="2026-01-28T01:23:19.514941525Z" level=info msg="StopPodSandbox for \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\"" Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.547 [WARNING][5762] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b93794f0-c760-43f8-9817-c3814f113c55", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac", Pod:"coredns-66bc5c9577-lbbmn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8dddf1d8203", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.548 [INFO][5762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.548 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" iface="eth0" netns="" Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.548 [INFO][5762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.548 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.564 [INFO][5769] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.564 [INFO][5769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.565 [INFO][5769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.573 [WARNING][5769] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.573 [INFO][5769] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.574 [INFO][5769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.577661 containerd[1726]: 2026-01-28 01:23:19.576 [INFO][5762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:19.578302 containerd[1726]: time="2026-01-28T01:23:19.578174972Z" level=info msg="TearDown network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\" successfully" Jan 28 01:23:19.578302 containerd[1726]: time="2026-01-28T01:23:19.578206892Z" level=info msg="StopPodSandbox for \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\" returns successfully" Jan 28 01:23:19.581037 containerd[1726]: time="2026-01-28T01:23:19.581012730Z" level=info msg="RemovePodSandbox for \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\"" Jan 28 01:23:19.581107 containerd[1726]: time="2026-01-28T01:23:19.581045850Z" level=info msg="Forcibly stopping sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\"" Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.618 [WARNING][5783] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b93794f0-c760-43f8-9817-c3814f113c55", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"7e184b2216a75591cf549e7634022dead5cf4ca1b2a90fdb5becc29193d913ac", Pod:"coredns-66bc5c9577-lbbmn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8dddf1d8203", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.618 [INFO][5783] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.618 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" iface="eth0" netns="" Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.618 [INFO][5783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.619 [INFO][5783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.645 [INFO][5790] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.646 [INFO][5790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.646 [INFO][5790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.656 [WARNING][5790] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.656 [INFO][5790] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" HandleID="k8s-pod-network.4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Workload="ci--4081.3.6--n--6d8ceced70-k8s-coredns--66bc5c9577--lbbmn-eth0" Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.657 [INFO][5790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.661630 containerd[1726]: 2026-01-28 01:23:19.659 [INFO][5783] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f" Jan 28 01:23:19.662488 containerd[1726]: time="2026-01-28T01:23:19.661574328Z" level=info msg="TearDown network for sandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\" successfully" Jan 28 01:23:19.671176 containerd[1726]: time="2026-01-28T01:23:19.671017002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:23:19.671176 containerd[1726]: time="2026-01-28T01:23:19.671095722Z" level=info msg="RemovePodSandbox \"4c5a52930087810204ce994920b0cd8fc31b6d45400382896065837825340b4f\" returns successfully" Jan 28 01:23:19.671738 containerd[1726]: time="2026-01-28T01:23:19.671509242Z" level=info msg="StopPodSandbox for \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\"" Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.704 [WARNING][5804] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0", GenerateName:"calico-apiserver-69c4f6486c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1527d25-60e3-4960-9f63-e5d366bf57e5", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4f6486c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35", Pod:"calico-apiserver-69c4f6486c-snwn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92a2bbd3304", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.704 [INFO][5804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.704 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" iface="eth0" netns="" Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.704 [INFO][5804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.704 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.721 [INFO][5811] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.721 [INFO][5811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.721 [INFO][5811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.730 [WARNING][5811] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.730 [INFO][5811] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.731 [INFO][5811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.735161 containerd[1726]: 2026-01-28 01:23:19.733 [INFO][5804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:19.735613 containerd[1726]: time="2026-01-28T01:23:19.735202168Z" level=info msg="TearDown network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\" successfully" Jan 28 01:23:19.735613 containerd[1726]: time="2026-01-28T01:23:19.735228288Z" level=info msg="StopPodSandbox for \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\" returns successfully" Jan 28 01:23:19.735851 containerd[1726]: time="2026-01-28T01:23:19.735804728Z" level=info msg="RemovePodSandbox for \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\"" Jan 28 01:23:19.735934 containerd[1726]: time="2026-01-28T01:23:19.735915408Z" level=info msg="Forcibly stopping sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\"" Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.769 [WARNING][5825] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0", GenerateName:"calico-apiserver-69c4f6486c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1527d25-60e3-4960-9f63-e5d366bf57e5", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4f6486c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-6d8ceced70", ContainerID:"f8ea0b1ecec130eac57080dc1e9b929a8fbcda09232fd18482d67884cff2df35", Pod:"calico-apiserver-69c4f6486c-snwn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92a2bbd3304", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.769 [INFO][5825] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.769 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" iface="eth0" netns="" Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.769 [INFO][5825] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.769 [INFO][5825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.789 [INFO][5832] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.789 [INFO][5832] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.789 [INFO][5832] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.798 [WARNING][5832] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.798 [INFO][5832] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" HandleID="k8s-pod-network.2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Workload="ci--4081.3.6--n--6d8ceced70-k8s-calico--apiserver--69c4f6486c--snwn4-eth0" Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.800 [INFO][5832] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:23:19.817917 containerd[1726]: 2026-01-28 01:23:19.802 [INFO][5825] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee" Jan 28 01:23:19.818905 containerd[1726]: time="2026-01-28T01:23:19.818388604Z" level=info msg="TearDown network for sandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\" successfully" Jan 28 01:23:19.826626 containerd[1726]: time="2026-01-28T01:23:19.826588760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:23:19.826890 containerd[1726]: time="2026-01-28T01:23:19.826769680Z" level=info msg="RemovePodSandbox \"2eae2976cb505548a11746d6e456ec1a460f35fc53b69272d255a333a8701cee\" returns successfully" Jan 28 01:23:25.519899 containerd[1726]: time="2026-01-28T01:23:25.519862497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:23:25.758595 containerd[1726]: time="2026-01-28T01:23:25.758554466Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:25.762346 containerd[1726]: time="2026-01-28T01:23:25.762308864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:23:25.762439 containerd[1726]: time="2026-01-28T01:23:25.762401064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:23:25.762561 kubelet[3179]: E0128 01:23:25.762525 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:23:25.762887 kubelet[3179]: E0128 01:23:25.762572 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:23:25.762887 kubelet[3179]: E0128 01:23:25.762646 3179 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container whisker start failed in pod whisker-5dd96f4d7f-sqvjh_calico-system(0439c29d-4b7c-4f38-8c80-be3fa0839945): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:25.763797 containerd[1726]: time="2026-01-28T01:23:25.763741224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:23:26.011223 containerd[1726]: time="2026-01-28T01:23:26.011136109Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:26.014704 containerd[1726]: time="2026-01-28T01:23:26.014670547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:23:26.014784 containerd[1726]: time="2026-01-28T01:23:26.014765347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:23:26.015126 kubelet[3179]: E0128 01:23:26.014924 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:23:26.015126 kubelet[3179]: E0128 01:23:26.014971 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:23:26.015126 kubelet[3179]: E0128 01:23:26.015046 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dd96f4d7f-sqvjh_calico-system(0439c29d-4b7c-4f38-8c80-be3fa0839945): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:26.015264 kubelet[3179]: E0128 01:23:26.015081 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" 
podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:23:26.522051 containerd[1726]: time="2026-01-28T01:23:26.519407314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:23:26.782963 containerd[1726]: time="2026-01-28T01:23:26.782844312Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:26.786267 containerd[1726]: time="2026-01-28T01:23:26.786179590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:23:26.786267 containerd[1726]: time="2026-01-28T01:23:26.786240630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:26.786853 kubelet[3179]: E0128 01:23:26.786381 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:26.786853 kubelet[3179]: E0128 01:23:26.786422 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:26.786853 kubelet[3179]: E0128 01:23:26.786501 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69c4f6486c-pzztc_calico-apiserver(e15a9a69-173f-490e-af7a-8a44d37eda4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:26.786853 kubelet[3179]: E0128 01:23:26.786531 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:23:28.523072 containerd[1726]: time="2026-01-28T01:23:28.522952626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:23:28.786644 containerd[1726]: time="2026-01-28T01:23:28.786333064Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:28.789040 containerd[1726]: time="2026-01-28T01:23:28.788982102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" Jan 28 01:23:28.789040 containerd[1726]: time="2026-01-28T01:23:28.789012582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:28.789284 kubelet[3179]: E0128 01:23:28.789236 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:28.790030 kubelet[3179]: E0128 01:23:28.789291 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:28.790030 kubelet[3179]: E0128 01:23:28.789373 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69c4f6486c-snwn4_calico-apiserver(e1527d25-60e3-4960-9f63-e5d366bf57e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:28.790030 kubelet[3179]: E0128 01:23:28.789403 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:23:29.520238 containerd[1726]: time="2026-01-28T01:23:29.520162404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:23:29.794810 containerd[1726]: time="2026-01-28T01:23:29.794635596Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:29.797974 containerd[1726]: time="2026-01-28T01:23:29.797924635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:23:29.798099 containerd[1726]: time="2026-01-28T01:23:29.797939515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:23:29.798195 kubelet[3179]: E0128 01:23:29.798150 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:23:29.798416 kubelet[3179]: E0128 01:23:29.798203 3179 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:23:29.798416 kubelet[3179]: E0128 01:23:29.798367 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-56cc7cdcfb-z7vlh_calico-system(7ad0c2f8-bb34-49c9-a1bb-d618f47675e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:29.798416 kubelet[3179]: E0128 01:23:29.798400 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:23:29.799782 containerd[1726]: time="2026-01-28T01:23:29.799735594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:23:30.053130 containerd[1726]: time="2026-01-28T01:23:30.053012797Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:30.057113 containerd[1726]: time="2026-01-28T01:23:30.056989195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:23:30.057113 containerd[1726]: time="2026-01-28T01:23:30.057080235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:23:30.057266 kubelet[3179]: E0128 01:23:30.057231 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:23:30.057316 kubelet[3179]: E0128 01:23:30.057271 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:23:30.057456 kubelet[3179]: E0128 01:23:30.057426 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:30.058164 containerd[1726]: time="2026-01-28T01:23:30.058062114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:23:30.356462 containerd[1726]: time="2026-01-28T01:23:30.356221656Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:30.358741 containerd[1726]: time="2026-01-28T01:23:30.358645935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:23:30.358741 containerd[1726]: time="2026-01-28T01:23:30.358712335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:30.358928 kubelet[3179]: E0128 01:23:30.358854 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:23:30.358928 kubelet[3179]: E0128 01:23:30.358897 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:23:30.359416 kubelet[3179]: E0128 01:23:30.359096 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lmhb6_calico-system(1055b396-3282-41c6-8cd5-0cd8ecaec9e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:30.359416 kubelet[3179]: E0128 01:23:30.359137 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:23:30.359540 containerd[1726]: time="2026-01-28T01:23:30.359214175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:23:30.618911 containerd[1726]: time="2026-01-28T01:23:30.618764415Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:30.622577 containerd[1726]: time="2026-01-28T01:23:30.622400653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:23:30.622577 containerd[1726]: time="2026-01-28T01:23:30.622538893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:23:30.622859 kubelet[3179]: E0128 01:23:30.622813 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:23:30.622928 kubelet[3179]: E0128 01:23:30.622869 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:23:30.622957 kubelet[3179]: E0128 01:23:30.622938 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:30.623019 kubelet[3179]: E0128 01:23:30.622974 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:23:39.520061 kubelet[3179]: E0128 01:23:39.519865 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:23:41.520885 kubelet[3179]: E0128 01:23:41.519595 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:23:41.520885 kubelet[3179]: E0128 01:23:41.520161 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:23:42.520402 kubelet[3179]: E0128 01:23:42.519956 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:23:42.520402 kubelet[3179]: E0128 01:23:42.520281 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:23:44.522747 kubelet[3179]: E0128 01:23:44.522701 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:23:51.520476 containerd[1726]: time="2026-01-28T01:23:51.520220082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:23:51.840512 containerd[1726]: time="2026-01-28T01:23:51.840191584Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:51.842854 containerd[1726]: time="2026-01-28T01:23:51.842774782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:23:51.842854 containerd[1726]: time="2026-01-28T01:23:51.842823902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:23:51.843079 kubelet[3179]: E0128 01:23:51.843032 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:23:51.843079 kubelet[3179]: E0128 01:23:51.843074 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:23:51.843364 kubelet[3179]: E0128 01:23:51.843146 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dd96f4d7f-sqvjh_calico-system(0439c29d-4b7c-4f38-8c80-be3fa0839945): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:51.844496 containerd[1726]: time="2026-01-28T01:23:51.844217941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:23:52.088141 containerd[1726]: time="2026-01-28T01:23:52.088097446Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:52.090919 containerd[1726]: time="2026-01-28T01:23:52.090810324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:23:52.090976 containerd[1726]: time="2026-01-28T01:23:52.090929764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:23:52.091161 kubelet[3179]: E0128 01:23:52.091123 3179 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:23:52.091216 kubelet[3179]: E0128 01:23:52.091171 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:23:52.091429 kubelet[3179]: E0128 01:23:52.091239 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dd96f4d7f-sqvjh_calico-system(0439c29d-4b7c-4f38-8c80-be3fa0839945): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:52.091429 kubelet[3179]: E0128 01:23:52.091283 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:23:54.524528 containerd[1726]: time="2026-01-28T01:23:54.524481492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:23:54.785979 containerd[1726]: time="2026-01-28T01:23:54.785718026Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:54.788530 containerd[1726]: time="2026-01-28T01:23:54.788430345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:23:54.788530 containerd[1726]: time="2026-01-28T01:23:54.788487905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:54.788721 kubelet[3179]: E0128 01:23:54.788684 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:54.789007 kubelet[3179]: E0128 01:23:54.788729 3179 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:54.789007 kubelet[3179]: E0128 01:23:54.788908 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69c4f6486c-snwn4_calico-apiserver(e1527d25-60e3-4960-9f63-e5d366bf57e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:54.789007 kubelet[3179]: E0128 01:23:54.788941 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:23:54.789900 containerd[1726]: time="2026-01-28T01:23:54.789872104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:23:55.023597 containerd[1726]: time="2026-01-28T01:23:55.023488094Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:55.027502 containerd[1726]: time="2026-01-28T01:23:55.026365133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:23:55.027502 containerd[1726]: time="2026-01-28T01:23:55.026392853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:55.027653 kubelet[3179]: E0128 01:23:55.026956 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:23:55.027653 kubelet[3179]: E0128 01:23:55.026998 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:23:55.027653 kubelet[3179]: E0128 01:23:55.027177 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lmhb6_calico-system(1055b396-3282-41c6-8cd5-0cd8ecaec9e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:55.027653 kubelet[3179]: E0128 01:23:55.027211 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:23:55.028986 containerd[1726]: time="2026-01-28T01:23:55.028734611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:23:55.293543 containerd[1726]: time="2026-01-28T01:23:55.293490264Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:55.296504 containerd[1726]: time="2026-01-28T01:23:55.296455782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:23:55.297258 containerd[1726]: time="2026-01-28T01:23:55.296570142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:23:55.297313 kubelet[3179]: E0128 01:23:55.296661 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:23:55.297313 kubelet[3179]: E0128 01:23:55.296702 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:23:55.297313 kubelet[3179]: E0128 01:23:55.296874 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-56cc7cdcfb-z7vlh_calico-system(7ad0c2f8-bb34-49c9-a1bb-d618f47675e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:55.297313 kubelet[3179]: E0128 01:23:55.296907 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:23:55.297787 containerd[1726]: time="2026-01-28T01:23:55.297617782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:23:55.542230 containerd[1726]: time="2026-01-28T01:23:55.542189166Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:55.549002 containerd[1726]: time="2026-01-28T01:23:55.548677522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:23:55.549002 containerd[1726]: time="2026-01-28T01:23:55.548736962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:23:55.550950 kubelet[3179]: E0128 01:23:55.548909 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:55.550950 kubelet[3179]: E0128 01:23:55.548954 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:23:55.550950 kubelet[3179]: E0128 01:23:55.549162 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69c4f6486c-pzztc_calico-apiserver(e15a9a69-173f-490e-af7a-8a44d37eda4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:55.550950 kubelet[3179]: E0128 01:23:55.549193 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:23:55.551098 containerd[1726]: time="2026-01-28T01:23:55.549539802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:23:55.806621 containerd[1726]: time="2026-01-28T01:23:55.806503517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:55.809744 containerd[1726]: time="2026-01-28T01:23:55.809296676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:23:55.809744 containerd[1726]: time="2026-01-28T01:23:55.809403836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:23:55.809887 kubelet[3179]: E0128 01:23:55.809525 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:23:55.809887 kubelet[3179]: E0128 01:23:55.809570 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:23:55.809887 kubelet[3179]: E0128 01:23:55.809635 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:55.811960 containerd[1726]: time="2026-01-28T01:23:55.811932595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:23:56.085615 containerd[1726]: time="2026-01-28T01:23:56.085501475Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:23:56.088397 containerd[1726]: time="2026-01-28T01:23:56.088348234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:23:56.088512 containerd[1726]: time="2026-01-28T01:23:56.088454594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:23:56.088627 kubelet[3179]: E0128 01:23:56.088587 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:23:56.088683 kubelet[3179]: E0128 01:23:56.088635 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:23:56.088727 kubelet[3179]: E0128 01:23:56.088706 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:23:56.088791 kubelet[3179]: E0128 01:23:56.088752 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:24:04.284213 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:60094.service - OpenSSH per-connection server daemon (10.200.16.10:60094). Jan 28 01:24:04.523316 kubelet[3179]: E0128 01:24:04.523277 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:24:04.778156 sshd[5895]: Accepted publickey for core from 10.200.16.10 port 60094 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:04.780680 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:04.786414 systemd-logind[1690]: New session 10 of user core. Jan 28 01:24:04.794037 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:24:05.223673 sshd[5895]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:05.227610 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:60094.service: Deactivated successfully. Jan 28 01:24:05.231011 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:24:05.234036 systemd-logind[1690]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:24:05.235053 systemd-logind[1690]: Removed session 10. 
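Every pull in the run above fails the same way: containerd asks ghcr.io for the tag, the registry answers 404 ("trying next host - response was http.StatusNotFound"), and kubelet surfaces the NotFound as ErrImagePull for the container. A quick way to confirm the failure is registry-side rather than node-side is to repeat one pull by hand on the node; a minimal sketch, with the image reference copied from the log and assuming containerd's default CRI socket:

  # Retry the failing pull through CRI, the same path kubelet uses.
  crictl pull ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4

  # Or bypass CRI and use containerd's own client in the k8s.io namespace.
  ctr -n k8s.io images pull ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4

Both should fail with the same "not found" resolution error seen in the journal, which points at a missing tag or repository on the registry rather than at node networking or credentials.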
Jan 28 01:24:05.520158 kubelet[3179]: E0128 01:24:05.519505 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:24:07.518802 kubelet[3179]: E0128 01:24:07.518755 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:24:08.715411 systemd[1]: run-containerd-runc-k8s.io-1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e-runc.lAMaeh.mount: Deactivated successfully. Jan 28 01:24:09.521872 kubelet[3179]: E0128 01:24:09.519440 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:24:09.523531 kubelet[3179]: E0128 01:24:09.522461 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:24:10.305043 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:48126.service - OpenSSH per-connection server daemon (10.200.16.10:48126). 
Jan 28 01:24:10.521146 kubelet[3179]: E0128 01:24:10.520817 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:24:10.770866 sshd[5931]: Accepted publickey for core from 10.200.16.10 port 48126 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:10.772242 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:10.778671 systemd-logind[1690]: New session 11 of user core. Jan 28 01:24:10.782992 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:24:11.186171 sshd[5931]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:11.191740 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:48126.service: Deactivated successfully. Jan 28 01:24:11.195059 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:24:11.195825 systemd-logind[1690]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:24:11.197235 systemd-logind[1690]: Removed session 11. Jan 28 01:24:16.274217 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:48140.service - OpenSSH per-connection server daemon (10.200.16.10:48140). Jan 28 01:24:16.525097 kubelet[3179]: E0128 01:24:16.524968 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:24:16.767744 sshd[5946]: Accepted publickey for core from 10.200.16.10 port 48140 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:16.768762 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:16.776343 systemd-logind[1690]: New session 12 of user core. Jan 28 01:24:16.781261 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:24:17.205988 sshd[5946]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:17.209024 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:48140.service: Deactivated successfully. Jan 28 01:24:17.213230 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:24:17.216631 systemd-logind[1690]: Session 12 logged out. Waiting for processes to exit. 
Jan 28 01:24:17.218310 systemd-logind[1690]: Removed session 12. Jan 28 01:24:17.289158 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:48148.service - OpenSSH per-connection server daemon (10.200.16.10:48148). Jan 28 01:24:17.740402 sshd[5966]: Accepted publickey for core from 10.200.16.10 port 48148 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:17.742916 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:17.750059 systemd-logind[1690]: New session 13 of user core. Jan 28 01:24:17.752027 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:24:18.181351 sshd[5966]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:18.188481 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:48148.service: Deactivated successfully. Jan 28 01:24:18.188749 systemd-logind[1690]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:24:18.195703 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:24:18.200163 systemd-logind[1690]: Removed session 13. Jan 28 01:24:18.279138 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:48150.service - OpenSSH per-connection server daemon (10.200.16.10:48150). Jan 28 01:24:18.775159 sshd[5977]: Accepted publickey for core from 10.200.16.10 port 48150 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:18.777492 sshd[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:18.784523 systemd-logind[1690]: New session 14 of user core. Jan 28 01:24:18.790997 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:24:19.193635 sshd[5977]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:19.198225 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:48150.service: Deactivated successfully. Jan 28 01:24:19.202169 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:24:19.203593 systemd-logind[1690]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:24:19.204516 systemd-logind[1690]: Removed session 14. 
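The sshd entries interleaved with the pull errors follow systemd's per-connection pattern: each incoming connection is handed to a templated service instance whose name encodes a counter plus the local and remote endpoints (e.g. sshd@10-10.200.20.12:22-10.200.16.10:48148.service), while systemd-logind wraps the PAM session in its own session-N.scope. These units can be inspected live; the commands below are standard systemctl/loginctl usage and assume shell access to the node:

  # List the per-connection sshd instances currently running.
  systemctl list-units 'sshd@*.service'

  # logind tracks each login as a session scope; show them and drill into one.
  loginctl list-sessions
  loginctl session-status 13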
Jan 28 01:24:20.523155 kubelet[3179]: E0128 01:24:20.520518 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:24:20.523155 kubelet[3179]: E0128 01:24:20.521098 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:24:22.526665 kubelet[3179]: E0128 01:24:22.526619 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:24:24.285112 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:52824.service - OpenSSH per-connection server daemon (10.200.16.10:52824). 
Jan 28 01:24:24.521740 kubelet[3179]: E0128 01:24:24.521704 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:24:24.522329 kubelet[3179]: E0128 01:24:24.522294 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:24:24.737175 sshd[5996]: Accepted publickey for core from 10.200.16.10 port 52824 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:24.739003 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:24.745574 systemd-logind[1690]: New session 15 of user core. Jan 28 01:24:24.750015 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:24:25.169735 sshd[5996]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:25.173788 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:52824.service: Deactivated successfully. Jan 28 01:24:25.177488 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:24:25.178758 systemd-logind[1690]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:24:25.180101 systemd-logind[1690]: Removed session 15. 
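Note the state change in these entries: the first failures were reported as ErrImagePull, and once a pull has failed kubelet stops hammering the registry and re-queues the container with ImagePullBackOff, roughly doubling the delay between attempts up to a cap of a few minutes, which is why the retry timestamps spread out across the minute. The same condition is visible from the API side; assuming kubectl access to this cluster, with pod names taken from the log:

  # Events on one affected pod show ErrImagePull followed by Back-off entries.
  kubectl -n calico-system describe pod csi-node-driver-s8nm6

  # Or filter failed-pull events across the namespace.
  kubectl -n calico-system get events --field-selector reason=Failed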
Jan 28 01:24:28.521093 kubelet[3179]: E0128 01:24:28.521024 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945" Jan 28 01:24:30.270096 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:44458.service - OpenSSH per-connection server daemon (10.200.16.10:44458). Jan 28 01:24:30.720853 sshd[6011]: Accepted publickey for core from 10.200.16.10 port 44458 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:30.721810 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:30.728452 systemd-logind[1690]: New session 16 of user core. Jan 28 01:24:30.734454 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:24:31.131885 sshd[6011]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:31.138153 systemd-logind[1690]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:24:31.138353 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:44458.service: Deactivated successfully. Jan 28 01:24:31.141024 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:24:31.141777 systemd-logind[1690]: Removed session 16. Jan 28 01:24:33.519624 kubelet[3179]: E0128 01:24:33.518748 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5" Jan 28 01:24:33.520624 kubelet[3179]: E0128 01:24:33.520315 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5" Jan 28 01:24:36.221113 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:44472.service - OpenSSH per-connection server daemon (10.200.16.10:44472). 
Jan 28 01:24:36.524326 containerd[1726]: time="2026-01-28T01:24:36.524052582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:24:36.669518 sshd[6031]: Accepted publickey for core from 10.200.16.10 port 44472 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:36.670911 sshd[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:36.675395 systemd-logind[1690]: New session 17 of user core. Jan 28 01:24:36.682034 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:24:36.767216 containerd[1726]: time="2026-01-28T01:24:36.767158890Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:24:36.772797 containerd[1726]: time="2026-01-28T01:24:36.771794968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:24:36.772797 containerd[1726]: time="2026-01-28T01:24:36.771930168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:24:36.773751 kubelet[3179]: E0128 01:24:36.773185 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:24:36.773751 kubelet[3179]: E0128 01:24:36.773227 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:24:36.773751 kubelet[3179]: E0128 01:24:36.773405 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69c4f6486c-pzztc_calico-apiserver(e15a9a69-173f-490e-af7a-8a44d37eda4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:24:36.773751 kubelet[3179]: E0128 01:24:36.773437 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d" Jan 28 01:24:36.776289 containerd[1726]: time="2026-01-28T01:24:36.773623647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:24:37.035175 containerd[1726]: time="2026-01-28T01:24:37.034947708Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 
01:24:37.038758 containerd[1726]: time="2026-01-28T01:24:37.038711066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:24:37.038867 containerd[1726]: time="2026-01-28T01:24:37.038809106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:24:37.039158 kubelet[3179]: E0128 01:24:37.038967 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:24:37.039158 kubelet[3179]: E0128 01:24:37.039024 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:24:37.039158 kubelet[3179]: E0128 01:24:37.039096 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lmhb6_calico-system(1055b396-3282-41c6-8cd5-0cd8ecaec9e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:24:37.039158 kubelet[3179]: E0128 01:24:37.039130 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4" Jan 28 01:24:37.081955 sshd[6031]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:37.086057 systemd-logind[1690]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:24:37.086234 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:44472.service: Deactivated successfully. Jan 28 01:24:37.088544 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:24:37.090696 systemd-logind[1690]: Removed session 17. Jan 28 01:24:37.177813 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:44480.service - OpenSSH per-connection server daemon (10.200.16.10:44480). Jan 28 01:24:37.671868 sshd[6046]: Accepted publickey for core from 10.200.16.10 port 44480 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:37.674421 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:37.680652 systemd-logind[1690]: New session 18 of user core. Jan 28 01:24:37.684010 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 01:24:38.254353 sshd[6046]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:38.257821 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:44480.service: Deactivated successfully. Jan 28 01:24:38.259574 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:24:38.260217 systemd-logind[1690]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:24:38.261226 systemd-logind[1690]: Removed session 18. Jan 28 01:24:38.297474 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:44492.service - OpenSSH per-connection server daemon (10.200.16.10:44492). Jan 28 01:24:38.721131 systemd[1]: run-containerd-runc-k8s.io-1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e-runc.Amr7EC.mount: Deactivated successfully. Jan 28 01:24:38.812389 sshd[6056]: Accepted publickey for core from 10.200.16.10 port 44492 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:38.813302 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:38.820002 systemd-logind[1690]: New session 19 of user core. Jan 28 01:24:38.825036 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:24:39.519959 containerd[1726]: time="2026-01-28T01:24:39.519706082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:24:39.791974 containerd[1726]: time="2026-01-28T01:24:39.791699538Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:24:39.794445 containerd[1726]: time="2026-01-28T01:24:39.794300777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:24:39.794445 containerd[1726]: time="2026-01-28T01:24:39.794412737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:24:39.794622 kubelet[3179]: E0128 01:24:39.794547 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:24:39.795823 kubelet[3179]: E0128 01:24:39.794610 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:24:39.795823 kubelet[3179]: E0128 01:24:39.794687 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:24:39.796236 containerd[1726]: time="2026-01-28T01:24:39.796041137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:24:39.903066 sshd[6056]: pam_unix(sshd:session): session 
closed for user core Jan 28 01:24:39.907261 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:44492.service: Deactivated successfully. Jan 28 01:24:39.909140 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:24:39.909930 systemd-logind[1690]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:24:39.910822 systemd-logind[1690]: Removed session 19. Jan 28 01:24:39.988237 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:38118.service - OpenSSH per-connection server daemon (10.200.16.10:38118). Jan 28 01:24:40.080429 containerd[1726]: time="2026-01-28T01:24:40.079949909Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:24:40.082570 containerd[1726]: time="2026-01-28T01:24:40.082526668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:24:40.082646 containerd[1726]: time="2026-01-28T01:24:40.082630988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:24:40.082801 kubelet[3179]: E0128 01:24:40.082761 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:24:40.082880 kubelet[3179]: E0128 01:24:40.082812 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:24:40.083135 kubelet[3179]: E0128 01:24:40.082911 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s8nm6_calico-system(f68d28e5-4350-4cc7-aede-a307338915a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:24:40.083135 kubelet[3179]: E0128 01:24:40.082966 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7" Jan 28 01:24:40.480615 sshd[6101]: Accepted publickey for core from 10.200.16.10 port 38118 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:40.482129 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:40.486932 systemd-logind[1690]: New session 20 of user core. Jan 28 01:24:40.496022 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:24:41.078924 sshd[6101]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:41.083272 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:38118.service: Deactivated successfully. Jan 28 01:24:41.086642 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:24:41.087649 systemd-logind[1690]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:24:41.088641 systemd-logind[1690]: Removed session 20. Jan 28 01:24:41.173199 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:38122.service - OpenSSH per-connection server daemon (10.200.16.10:38122). Jan 28 01:24:41.656910 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 38122 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:24:41.658246 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:41.663150 systemd-logind[1690]: New session 21 of user core. Jan 28 01:24:41.668158 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:24:42.093059 sshd[6114]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:42.096335 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:38122.service: Deactivated successfully. Jan 28 01:24:42.099830 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:24:42.102307 systemd-logind[1690]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:24:42.104905 systemd-logind[1690]: Removed session 21. 
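A smaller recurring pattern above is the run-containerd-runc-k8s.io-…-runc.*.mount deactivations (the same container ID 1b4c30c8… appears with different random suffixes, lAMaeh and Amr7EC). These are transient mounts the containerd runc shim creates when a process is exec'd inside an already-running container, typically a periodic exec probe, and systemd is simply logging their cleanup; they are harmless. If needed they can be watched as they come and go:

  # Watch containerd's transient runc mounts appear and disappear.
  systemctl list-units --type=mount 'run-containerd*'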
Jan 28 01:24:43.521576 containerd[1726]: time="2026-01-28T01:24:43.521342811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 28 01:24:43.817167 containerd[1726]: time="2026-01-28T01:24:43.816931619Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:24:43.819494 containerd[1726]: time="2026-01-28T01:24:43.819350018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 28 01:24:43.819494 containerd[1726]: time="2026-01-28T01:24:43.819463698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 28 01:24:43.820495 kubelet[3179]: E0128 01:24:43.819727 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 01:24:43.820495 kubelet[3179]: E0128 01:24:43.819770 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 01:24:43.820495 kubelet[3179]: E0128 01:24:43.819854 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dd96f4d7f-sqvjh_calico-system(0439c29d-4b7c-4f38-8c80-be3fa0839945): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:24:43.821608 containerd[1726]: time="2026-01-28T01:24:43.821379417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 28 01:24:44.098204 containerd[1726]: time="2026-01-28T01:24:44.097733076Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:24:44.101067 containerd[1726]: time="2026-01-28T01:24:44.100908275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 28 01:24:44.101067 containerd[1726]: time="2026-01-28T01:24:44.101029954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 28 01:24:44.101543 kubelet[3179]: E0128 01:24:44.101364 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 01:24:44.101543 kubelet[3179]: E0128 01:24:44.101494 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 01:24:44.101779 kubelet[3179]: E0128 01:24:44.101666 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dd96f4d7f-sqvjh_calico-system(0439c29d-4b7c-4f38-8c80-be3fa0839945): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:24:44.102063 kubelet[3179]: E0128 01:24:44.101929 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945"
Jan 28 01:24:45.519195 containerd[1726]: time="2026-01-28T01:24:45.519051434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 01:24:45.818457 containerd[1726]: time="2026-01-28T01:24:45.818121482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:24:45.820809 containerd[1726]: time="2026-01-28T01:24:45.820774200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 01:24:45.820897 containerd[1726]: time="2026-01-28T01:24:45.820878040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 28 01:24:45.821053 kubelet[3179]: E0128 01:24:45.821014 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:24:45.821333 kubelet[3179]: E0128 01:24:45.821064 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:24:45.821333 kubelet[3179]: E0128 01:24:45.821135 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69c4f6486c-snwn4_calico-apiserver(e1527d25-60e3-4960-9f63-e5d366bf57e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:24:45.821333 kubelet[3179]: E0128 01:24:45.821166 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5"
Jan 28 01:24:47.184096 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:38134.service - OpenSSH per-connection server daemon (10.200.16.10:38134).
Jan 28 01:24:47.519712 containerd[1726]: time="2026-01-28T01:24:47.519420257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 28 01:24:47.632213 sshd[6152]: Accepted publickey for core from 10.200.16.10 port 38134 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI
Jan 28 01:24:47.633133 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:47.639547 systemd-logind[1690]: New session 22 of user core.
Jan 28 01:24:47.648567 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 28 01:24:47.798958 containerd[1726]: time="2026-01-28T01:24:47.798792795Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:24:47.801677 containerd[1726]: time="2026-01-28T01:24:47.801569874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 28 01:24:47.801762 containerd[1726]: time="2026-01-28T01:24:47.801647314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 28 01:24:47.801923 kubelet[3179]: E0128 01:24:47.801880 3179 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 01:24:47.802253 kubelet[3179]: E0128 01:24:47.801932 3179 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 01:24:47.802253 kubelet[3179]: E0128 01:24:47.802005 3179 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-56cc7cdcfb-z7vlh_calico-system(7ad0c2f8-bb34-49c9-a1bb-d618f47675e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:24:47.802253 kubelet[3179]: E0128 01:24:47.802047 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5"
Jan 28 01:24:48.030224 sshd[6152]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:48.035101 systemd-logind[1690]: Session 22 logged out. Waiting for processes to exit.
Jan 28 01:24:48.035418 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:38134.service: Deactivated successfully.
Jan 28 01:24:48.037077 systemd[1]: session-22.scope: Deactivated successfully.
Jan 28 01:24:48.038045 systemd-logind[1690]: Removed session 22.
Jan 28 01:24:48.529680 kubelet[3179]: E0128 01:24:48.529611 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d"
Jan 28 01:24:52.520858 kubelet[3179]: E0128 01:24:52.519047 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4"
Jan 28 01:24:53.116931 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:46950.service - OpenSSH per-connection server daemon (10.200.16.10:46950).
Jan 28 01:24:53.569143 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 46950 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI
Jan 28 01:24:53.570622 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:53.575271 systemd-logind[1690]: New session 23 of user core.
Jan 28 01:24:53.580981 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 28 01:24:53.961860 sshd[6165]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:53.965121 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:46950.service: Deactivated successfully.
Jan 28 01:24:53.966938 systemd[1]: session-23.scope: Deactivated successfully.
Jan 28 01:24:53.968058 systemd-logind[1690]: Session 23 logged out. Waiting for processes to exit.
Jan 28 01:24:53.969286 systemd-logind[1690]: Removed session 23.
Jan 28 01:24:55.522872 kubelet[3179]: E0128 01:24:55.520870 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7"
Jan 28 01:24:56.522866 kubelet[3179]: E0128 01:24:56.522671 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945"
Jan 28 01:24:58.524850 kubelet[3179]: E0128 01:24:58.524328 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5"
Jan 28 01:24:59.060827 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:46964.service - OpenSSH per-connection server daemon (10.200.16.10:46964).
Jan 28 01:24:59.520481 kubelet[3179]: E0128 01:24:59.519613 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5"
Jan 28 01:24:59.549638 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 46964 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI
Jan 28 01:24:59.551123 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:59.555913 systemd-logind[1690]: New session 24 of user core.
Jan 28 01:24:59.561989 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 28 01:24:59.992052 sshd[6180]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:59.996222 systemd-logind[1690]: Session 24 logged out. Waiting for processes to exit.
Jan 28 01:24:59.997095 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:46964.service: Deactivated successfully.
Jan 28 01:25:00.002208 systemd[1]: session-24.scope: Deactivated successfully.
Jan 28 01:25:00.003499 systemd-logind[1690]: Removed session 24.
Jan 28 01:25:03.518605 kubelet[3179]: E0128 01:25:03.518530 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-pzztc" podUID="e15a9a69-173f-490e-af7a-8a44d37eda4d"
Jan 28 01:25:04.524897 kubelet[3179]: E0128 01:25:04.524079 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lmhb6" podUID="1055b396-3282-41c6-8cd5-0cd8ecaec9e4"
Jan 28 01:25:05.077138 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:48358.service - OpenSSH per-connection server daemon (10.200.16.10:48358).
Jan 28 01:25:05.567080 sshd[6193]: Accepted publickey for core from 10.200.16.10 port 48358 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI
Jan 28 01:25:05.568369 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:05.572677 systemd-logind[1690]: New session 25 of user core.
Jan 28 01:25:05.577000 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 01:25:06.016124 sshd[6193]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:06.020208 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:48358.service: Deactivated successfully.
Jan 28 01:25:06.023228 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 01:25:06.026977 systemd-logind[1690]: Session 25 logged out. Waiting for processes to exit.
Jan 28 01:25:06.029265 systemd-logind[1690]: Removed session 25.
Jan 28 01:25:08.720650 systemd[1]: run-containerd-runc-k8s.io-1b4c30c82aa5d88edab6f9afab89cbdb6f207365a307f1c57c1caa72ff6cec3e-runc.fe7lBo.mount: Deactivated successfully.
Jan 28 01:25:09.520152 kubelet[3179]: E0128 01:25:09.519661 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s8nm6" podUID="f68d28e5-4350-4cc7-aede-a307338915a7"
Jan 28 01:25:09.520152 kubelet[3179]: E0128 01:25:09.519766 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd96f4d7f-sqvjh" podUID="0439c29d-4b7c-4f38-8c80-be3fa0839945"
Jan 28 01:25:10.524364 kubelet[3179]: E0128 01:25:10.524071 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4f6486c-snwn4" podUID="e1527d25-60e3-4960-9f63-e5d366bf57e5"
Jan 28 01:25:11.100769 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.16.10:34114.service - OpenSSH per-connection server daemon (10.200.16.10:34114).
Jan 28 01:25:11.555873 sshd[6229]: Accepted publickey for core from 10.200.16.10 port 34114 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI
Jan 28 01:25:11.571386 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:11.575370 systemd-logind[1690]: New session 26 of user core.
Jan 28 01:25:11.581960 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 01:25:11.959972 sshd[6229]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:11.963794 systemd[1]: sshd@23-10.200.20.12:22-10.200.16.10:34114.service: Deactivated successfully.
Jan 28 01:25:11.965816 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 01:25:11.966651 systemd-logind[1690]: Session 26 logged out. Waiting for processes to exit.
Jan 28 01:25:11.967567 systemd-logind[1690]: Removed session 26.
Jan 28 01:25:12.525320 kubelet[3179]: E0128 01:25:12.525279 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56cc7cdcfb-z7vlh" podUID="7ad0c2f8-bb34-49c9-a1bb-d618f47675e5"